Test Report: Docker_Linux_crio_arm64 21975

bf5d9cb38ae1a2b3e4a9e22e363e3b0c86085c7c:2025-11-24:42481

Failed tests (36/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.45
35 TestAddons/parallel/Registry 17.21
36 TestAddons/parallel/RegistryCreds 0.51
37 TestAddons/parallel/Ingress 144.31
38 TestAddons/parallel/InspektorGadget 6.28
39 TestAddons/parallel/MetricsServer 5.36
41 TestAddons/parallel/CSI 51.65
42 TestAddons/parallel/Headlamp 4.23
43 TestAddons/parallel/CloudSpanner 5.28
44 TestAddons/parallel/LocalPath 9.75
45 TestAddons/parallel/NvidiaDevicePlugin 5.36
46 TestAddons/parallel/Yakd 6.26
97 TestFunctional/parallel/ServiceCmdConnect 603.55
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.13
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.18
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
128 TestFunctional/parallel/ServiceCmd/DeployApp 600.85
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
147 TestFunctional/parallel/ServiceCmd/Format 0.39
148 TestFunctional/parallel/ServiceCmd/URL 0.4
191 TestJSONOutput/pause/Command 2.28
197 TestJSONOutput/unpause/Command 1.66
282 TestPause/serial/Pause 6.75
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.43
304 TestStartStop/group/old-k8s-version/serial/Pause 6.62
310 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.55
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 4.21
322 TestStartStop/group/no-preload/serial/Pause 7.04
328 TestStartStop/group/embed-certs/serial/Pause 7.53
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.55
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.89
342 TestStartStop/group/newest-cni/serial/Pause 6.49
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.52
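Most of the addons failures below share one signature: the `addons disable` step exits with status 11 (MK_ADDON_DISABLE_PAUSED), detailed in the per-test logs. To reproduce a single failure in isolation, a minimal sketch (assumptions: a minikube source checkout, and that the integration suite's -minikube-start-args flag is available; the out/minikube-linux-arm64 binary is the one this job built):

    # Re-run one failed test against the docker/crio combination from this report.
    go test -v ./test/integration -timeout 60m \
      -run 'TestAddons/parallel/Registry' \
      -args -minikube-start-args="--driver=docker --container-runtime=crio"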
TestAddons/serial/Volcano (0.45s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable volcano --alsologtostderr -v=1: exit status 11 (449.336675ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 03:16:09.017903  298194 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:16:09.019566  298194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:09.019591  298194 out.go:374] Setting ErrFile to fd 2...
	I1124 03:16:09.019599  298194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:09.019900  298194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:16:09.020259  298194 mustload.go:66] Loading cluster: addons-153780
	I1124 03:16:09.020691  298194 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:09.020712  298194 addons.go:622] checking whether the cluster is paused
	I1124 03:16:09.020829  298194 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:09.020846  298194 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:16:09.021386  298194 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:16:09.042043  298194 ssh_runner.go:195] Run: systemctl --version
	I1124 03:16:09.042101  298194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:16:09.060455  298194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:16:09.164968  298194 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:16:09.165064  298194 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:16:09.196490  298194 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:16:09.196521  298194 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:16:09.196526  298194 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:16:09.196530  298194 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:16:09.196533  298194 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:16:09.196537  298194 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:16:09.196540  298194 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:16:09.196543  298194 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:16:09.196545  298194 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:16:09.196553  298194 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:16:09.196556  298194 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:16:09.196559  298194 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:16:09.196562  298194 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:16:09.196566  298194 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:16:09.196569  298194 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:16:09.196574  298194 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:16:09.196581  298194 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:16:09.196585  298194 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:16:09.196588  298194 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:16:09.196591  298194 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:16:09.196596  298194 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:16:09.196603  298194 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:16:09.196606  298194 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:16:09.196609  298194 cri.go:89] found id: ""
	I1124 03:16:09.196663  298194 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:16:09.211606  298194 out.go:203] 
	W1124 03:16:09.214595  298194 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:16:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:16:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:16:09.214627  298194 out.go:285] * 
	* 
	W1124 03:16:09.374607  298194 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:16:09.377963  298194 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.45s)
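The failure signature above recurs in nearly every addons test below: the paused-state check that gates `addons disable` shells out to `runc list`, which fails because /run/runc does not exist on this CRI-O node, so the command aborts with MK_ADDON_DISABLE_PAUSED before touching the addon. Both probes from the log can be replayed by hand; a sketch, assuming the addons-153780 profile from this report is still running:

    # The crictl query minikube ran first succeeds (CRI-O answers over its own socket):
    minikube -p addons-153780 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

    # The follow-up runc query is the one that fails, matching the stderr above:
    minikube -p addons-153780 ssh -- sudo runc list -f json
    # time=... level=error msg="open /run/runc: no such file or directory"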

TestAddons/parallel/Registry (17.21s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.756945ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-fhxm7" [37ea5e79-e46c-4241-ae8a-13e3a990caef] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00319029s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-v264t" [ce8f2dcd-d97d-4ae3-96f5-94cb55bf9408] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003493082s
addons_test.go:392: (dbg) Run:  kubectl --context addons-153780 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-153780 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-153780 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.692144296s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 ip
2025/11/24 03:16:36 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable registry --alsologtostderr -v=1: exit status 11 (264.451856ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 03:16:36.611659  298711 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:16:36.612470  298711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:36.612522  298711 out.go:374] Setting ErrFile to fd 2...
	I1124 03:16:36.612546  298711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:36.612857  298711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:16:36.613244  298711 mustload.go:66] Loading cluster: addons-153780
	I1124 03:16:36.613716  298711 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:36.613759  298711 addons.go:622] checking whether the cluster is paused
	I1124 03:16:36.613904  298711 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:36.613936  298711 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:16:36.614538  298711 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:16:36.633006  298711 ssh_runner.go:195] Run: systemctl --version
	I1124 03:16:36.633072  298711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:16:36.650850  298711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:16:36.757090  298711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:16:36.757190  298711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:16:36.788045  298711 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:16:36.788072  298711 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:16:36.788078  298711 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:16:36.788082  298711 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:16:36.788091  298711 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:16:36.788113  298711 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:16:36.788128  298711 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:16:36.788132  298711 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:16:36.788135  298711 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:16:36.788141  298711 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:16:36.788150  298711 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:16:36.788153  298711 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:16:36.788156  298711 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:16:36.788159  298711 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:16:36.788171  298711 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:16:36.788193  298711 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:16:36.788218  298711 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:16:36.788227  298711 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:16:36.788231  298711 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:16:36.788234  298711 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:16:36.788245  298711 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:16:36.788252  298711 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:16:36.788259  298711 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:16:36.788262  298711 cri.go:89] found id: ""
	I1124 03:16:36.788341  298711 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:16:36.806401  298711 out.go:203] 
	W1124 03:16:36.809600  298711 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:16:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:16:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:16:36.809625  298711 out.go:285] * 
	* 
	W1124 03:16:36.815367  298711 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:16:36.818545  298711 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (17.21s)
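Note that the registry itself was healthy: both pods passed readiness and the in-cluster probe succeeded; only the trailing `addons disable registry` hit the runc check. The probe the test ran (copied from the log above) can be reused as a manual health check:

    kubectl --context addons-153780 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"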

TestAddons/parallel/RegistryCreds (0.51s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.878612ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-153780
addons_test.go:332: (dbg) Run:  kubectl --context addons-153780 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (268.873463ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 03:17:17.806408  300777 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:17:17.807220  300777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:17.807254  300777 out.go:374] Setting ErrFile to fd 2...
	I1124 03:17:17.807276  300777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:17.807649  300777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:17:17.808009  300777 mustload.go:66] Loading cluster: addons-153780
	I1124 03:17:17.808680  300777 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:17:17.808722  300777 addons.go:622] checking whether the cluster is paused
	I1124 03:17:17.808926  300777 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:17:17.808964  300777 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:17:17.809872  300777 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:17:17.829676  300777 ssh_runner.go:195] Run: systemctl --version
	I1124 03:17:17.829731  300777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:17:17.849219  300777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:17:17.953037  300777 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:17:17.953123  300777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:17:17.987356  300777 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:17:17.987421  300777 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:17:17.987441  300777 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:17:17.987466  300777 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:17:17.987502  300777 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:17:17.987525  300777 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:17:17.987547  300777 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:17:17.987581  300777 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:17:17.987601  300777 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:17:17.987627  300777 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:17:17.987645  300777 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:17:17.987677  300777 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:17:17.987696  300777 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:17:17.987720  300777 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:17:17.987741  300777 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:17:17.987778  300777 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:17:17.987812  300777 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:17:17.987836  300777 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:17:17.987864  300777 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:17:17.987887  300777 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:17:17.987911  300777 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:17:17.987931  300777 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:17:17.987963  300777 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:17:17.987985  300777 cri.go:89] found id: ""
	I1124 03:17:17.988072  300777 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:17:18.008276  300777 out.go:203] 
	W1124 03:17:18.011351  300777 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:17:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:17:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:17:18.011391  300777 out.go:285] * 
	* 
	W1124 03:17:18.017784  300777 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:17:18.020695  300777 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.51s)
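As with the registry test, the configure step and the secret check both passed; only the disable step failed on the same runc probe. The secret inspection from the log, with a hypothetical grep filter appended for convenience:

    kubectl --context addons-153780 -n kube-system get secret -o yaml | grep -i registry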

TestAddons/parallel/Ingress (144.31s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-153780 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-153780 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-153780 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [4e09e2ee-4a00-46ac-8f0e-8df4599b2550] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [4e09e2ee-4a00-46ac-8f0e-8df4599b2550] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003141115s
I1124 03:17:22.730980  291389 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.502972043s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-153780 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
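The remote curl exited with status 28, curl's "operation timed out" code, which the ssh wrapper propagated; the controller had reported Ready earlier in the test, so the request reached a listener that never answered within the test window. A bounded manual probe (assumption: the 10s --max-time is arbitrary) separates slow from unreachable:

    out/minikube-linux-arm64 -p addons-153780 ssh \
      "curl -s --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
    kubectl --context addons-153780 -n ingress-nginx get pods -o wide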
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-153780
helpers_test.go:243: (dbg) docker inspect addons-153780:

-- stdout --
	[
	    {
	        "Id": "c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4",
	        "Created": "2025-11-24T03:13:54.24845116Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292550,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:13:54.330497265Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4/hostname",
	        "HostsPath": "/var/lib/docker/containers/c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4/hosts",
	        "LogPath": "/var/lib/docker/containers/c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4/c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4-json.log",
	        "Name": "/addons-153780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-153780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-153780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4",
	                "LowerDir": "/var/lib/docker/overlay2/4aca70ce84ed29d2d22fb2bea7d783140df107a3524b3dd95ff3f84cfb14e5e7-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4aca70ce84ed29d2d22fb2bea7d783140df107a3524b3dd95ff3f84cfb14e5e7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4aca70ce84ed29d2d22fb2bea7d783140df107a3524b3dd95ff3f84cfb14e5e7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4aca70ce84ed29d2d22fb2bea7d783140df107a3524b3dd95ff3f84cfb14e5e7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-153780",
	                "Source": "/var/lib/docker/volumes/addons-153780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-153780",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-153780",
	                "name.minikube.sigs.k8s.io": "addons-153780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ad8f7a9eeef4c985a500846f0c83191d6fd3bc91a84be2fb79d9eed270839d12",
	            "SandboxKey": "/var/run/docker/netns/ad8f7a9eeef4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-153780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:3c:a7:5d:18:85",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e84e42d10d667ce334546571b9e3511d266786293c95c8dc2a2fc672a60a2b37",
	                    "EndpointID": "d95674ecb10592bfb7689e8f3aa162b82325860a3fb998e0677bc272216e4a5f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-153780",
	                        "c475d9049df5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-153780 -n addons-153780
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-153780 logs -n 25: (1.503344665s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-545793                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-545793 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ --download-only -p binary-mirror-193578 --alsologtostderr --binary-mirror http://127.0.0.1:46679 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-193578   │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ delete  │ -p binary-mirror-193578                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-193578   │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ addons  │ enable dashboard -p addons-153780                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ addons  │ disable dashboard -p addons-153780                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ start   │ -p addons-153780 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:16 UTC │
	│ addons  │ addons-153780 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │                     │
	│ addons  │ addons-153780 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │                     │
	│ addons  │ addons-153780 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │                     │
	│ ip      │ addons-153780 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │ 24 Nov 25 03:16 UTC │
	│ addons  │ addons-153780 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │                     │
	│ addons  │ addons-153780 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │                     │
	│ ssh     │ addons-153780 ssh cat /opt/local-path-provisioner/pvc-eefc238c-13a7-4139-bcbc-502e91e6b046_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │ 24 Nov 25 03:16 UTC │
	│ addons  │ addons-153780 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │                     │
	│ addons  │ addons-153780 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-153780 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:16 UTC │                     │
	│ addons  │ addons-153780 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │                     │
	│ addons  │ addons-153780 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │                     │
	│ addons  │ addons-153780 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │                     │
	│ addons  │ addons-153780 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │                     │
	│ addons  │ addons-153780 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-153780                                                                                                                                                                                                                                                                                                                                                                                           │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │ 24 Nov 25 03:17 UTC │
	│ addons  │ addons-153780 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │                     │
	│ ssh     │ addons-153780 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:17 UTC │                     │
	│ ip      │ addons-153780 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-153780          │ jenkins │ v1.37.0 │ 24 Nov 25 03:19 UTC │ 24 Nov 25 03:19 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:13:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:13:29.428779  292146 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:13:29.428910  292146 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:29.428919  292146 out.go:374] Setting ErrFile to fd 2...
	I1124 03:13:29.428924  292146 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:29.429160  292146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:13:29.429589  292146 out.go:368] Setting JSON to false
	I1124 03:13:29.430386  292146 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6939,"bootTime":1763947071,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 03:13:29.430486  292146 start.go:143] virtualization:  
	I1124 03:13:29.433962  292146 out.go:179] * [addons-153780] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:13:29.436912  292146 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:13:29.436984  292146 notify.go:221] Checking for updates...
	I1124 03:13:29.442864  292146 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:13:29.445782  292146 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 03:13:29.448806  292146 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 03:13:29.452187  292146 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:13:29.455128  292146 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:13:29.458295  292146 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:13:29.492101  292146 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:13:29.492238  292146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:29.551207  292146 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-24 03:13:29.541297467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:13:29.551321  292146 docker.go:319] overlay module found
	I1124 03:13:29.554439  292146 out.go:179] * Using the docker driver based on user configuration
	I1124 03:13:29.557401  292146 start.go:309] selected driver: docker
	I1124 03:13:29.557425  292146 start.go:927] validating driver "docker" against <nil>
	I1124 03:13:29.557439  292146 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:13:29.558181  292146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:29.618873  292146 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-24 03:13:29.610067952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:13:29.619025  292146 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:13:29.619244  292146 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:13:29.622312  292146 out.go:179] * Using Docker driver with root privileges
	I1124 03:13:29.625082  292146 cni.go:84] Creating CNI manager for ""
	I1124 03:13:29.625156  292146 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:13:29.625169  292146 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:13:29.625257  292146 start.go:353] cluster config:
	{Name:addons-153780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-153780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:29.628224  292146 out.go:179] * Starting "addons-153780" primary control-plane node in "addons-153780" cluster
	I1124 03:13:29.630962  292146 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:13:29.633898  292146 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:13:29.636661  292146 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:13:29.636707  292146 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 03:13:29.636720  292146 cache.go:65] Caching tarball of preloaded images
	I1124 03:13:29.636730  292146 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:13:29.636805  292146 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 03:13:29.636816  292146 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:13:29.637165  292146 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/config.json ...
	I1124 03:13:29.637195  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/config.json: {Name:mk8d9952a307787a3248d1e4288b64c24558edda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:29.652357  292146 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 to local cache
	I1124 03:13:29.652507  292146 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory
	I1124 03:13:29.652526  292146 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory, skipping pull
	I1124 03:13:29.652531  292146 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in cache, skipping pull
	I1124 03:13:29.652538  292146 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 as a tarball
	I1124 03:13:29.652543  292146 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 from local cache
	I1124 03:13:47.786699  292146 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 from cached tarball
	I1124 03:13:47.786737  292146 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:13:47.786796  292146 start.go:360] acquireMachinesLock for addons-153780: {Name:mk35d609c14454834f274f9197604c5ae01b8f37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:47.786933  292146 start.go:364] duration metric: took 113.651µs to acquireMachinesLock for "addons-153780"
	I1124 03:13:47.786965  292146 start.go:93] Provisioning new machine with config: &{Name:addons-153780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-153780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:13:47.787035  292146 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:13:47.790473  292146 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1124 03:13:47.790727  292146 start.go:159] libmachine.API.Create for "addons-153780" (driver="docker")
	I1124 03:13:47.790765  292146 client.go:173] LocalClient.Create starting
	I1124 03:13:47.790880  292146 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem
	I1124 03:13:47.870858  292146 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem
	I1124 03:13:48.102559  292146 cli_runner.go:164] Run: docker network inspect addons-153780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:13:48.118561  292146 cli_runner.go:211] docker network inspect addons-153780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:13:48.118660  292146 network_create.go:284] running [docker network inspect addons-153780] to gather additional debugging logs...
	I1124 03:13:48.118683  292146 cli_runner.go:164] Run: docker network inspect addons-153780
	W1124 03:13:48.134910  292146 cli_runner.go:211] docker network inspect addons-153780 returned with exit code 1
	I1124 03:13:48.134942  292146 network_create.go:287] error running [docker network inspect addons-153780]: docker network inspect addons-153780: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-153780 not found
	I1124 03:13:48.134957  292146 network_create.go:289] output of [docker network inspect addons-153780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-153780 not found
	
	** /stderr **
	I1124 03:13:48.135061  292146 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:13:48.152503  292146 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ce200}
	I1124 03:13:48.152553  292146 network_create.go:124] attempt to create docker network addons-153780 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 03:13:48.152609  292146 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-153780 addons-153780
	I1124 03:13:48.208248  292146 network_create.go:108] docker network addons-153780 192.168.49.0/24 created
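The subnet and gateway chosen above can be cross-checked against the live network while the cluster from this run is still up; a minimal sketch (the --format template is illustrative, not taken from the log):

    docker network inspect addons-153780 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    # expected for this run: 192.168.49.0/24 192.168.49.1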
	I1124 03:13:48.208278  292146 kic.go:121] calculated static IP "192.168.49.2" for the "addons-153780" container
	I1124 03:13:48.208353  292146 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:13:48.233551  292146 cli_runner.go:164] Run: docker volume create addons-153780 --label name.minikube.sigs.k8s.io=addons-153780 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:13:48.252534  292146 oci.go:103] Successfully created a docker volume addons-153780
	I1124 03:13:48.252643  292146 cli_runner.go:164] Run: docker run --rm --name addons-153780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-153780 --entrypoint /usr/bin/test -v addons-153780:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:13:49.711168  292146 cli_runner.go:217] Completed: docker run --rm --name addons-153780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-153780 --entrypoint /usr/bin/test -v addons-153780:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib: (1.458489124s)
	I1124 03:13:49.711199  292146 oci.go:107] Successfully prepared a docker volume addons-153780
	I1124 03:13:49.711241  292146 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:13:49.711252  292146 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:13:49.711321  292146 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-153780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:13:54.181763  292146 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-153780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (4.47040478s)
	I1124 03:13:54.181806  292146 kic.go:203] duration metric: took 4.470549429s to extract preloaded images to volume ...
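The tar run above unpacks the preloaded CRI-O image store into the addons-153780 volume, which is later mounted at /var inside the node container. A rough way to inspect the result, assuming the volume still exists (the alpine helper image and the containers/storage path are standard defaults, not shown in this log):

    docker run --rm -v addons-153780:/var alpine ls /var/lib/containers/storage
    # overlay, overlay-images, overlay-layers, ... (the preloaded cri-o image store)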
	W1124 03:13:54.181935  292146 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 03:13:54.182055  292146 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:13:54.234586  292146 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-153780 --name addons-153780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-153780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-153780 --network addons-153780 --ip 192.168.49.2 --volume addons-153780:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:13:54.539636  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Running}}
	I1124 03:13:54.558502  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:13:54.583723  292146 cli_runner.go:164] Run: docker exec addons-153780 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:13:54.638383  292146 oci.go:144] the created container "addons-153780" has a running status.
	I1124 03:13:54.638412  292146 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa...
	I1124 03:13:54.871810  292146 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:13:54.896101  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:13:54.914672  292146 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:13:54.914692  292146 kic_runner.go:114] Args: [docker exec --privileged addons-153780 chown docker:docker /home/docker/.ssh/authorized_keys]
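With the key installed in authorized_keys, the node is reachable over plain SSH through the host port Docker published for 22/tcp. A manual equivalent of what the provisioner does next (the host port is allocated dynamically, so look it up rather than reusing 33139 from this run):

    docker port addons-153780 22     # e.g. 127.0.0.1:33139
    ssh -i /home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa -p 33139 docker@127.0.0.1 hostname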
	I1124 03:13:54.985133  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:13:55.021668  292146 machine.go:94] provisionDockerMachine start ...
	I1124 03:13:55.021782  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:55.052466  292146 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:55.052789  292146 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1124 03:13:55.052798  292146 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:13:55.053664  292146 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51594->127.0.0.1:33139: read: connection reset by peer
	I1124 03:13:58.205815  292146 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-153780
	
	I1124 03:13:58.205842  292146 ubuntu.go:182] provisioning hostname "addons-153780"
	I1124 03:13:58.205909  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:58.224250  292146 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:58.224573  292146 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1124 03:13:58.224588  292146 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-153780 && echo "addons-153780" | sudo tee /etc/hostname
	I1124 03:13:58.379693  292146 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-153780
	
	I1124 03:13:58.379767  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:58.397728  292146 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:58.398081  292146 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1124 03:13:58.398098  292146 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-153780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-153780/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-153780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:13:58.546661  292146 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:13:58.546700  292146 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 03:13:58.546723  292146 ubuntu.go:190] setting up certificates
	I1124 03:13:58.546732  292146 provision.go:84] configureAuth start
	I1124 03:13:58.546792  292146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-153780
	I1124 03:13:58.566446  292146 provision.go:143] copyHostCerts
	I1124 03:13:58.566562  292146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 03:13:58.566681  292146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 03:13:58.566732  292146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 03:13:58.566802  292146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.addons-153780 san=[127.0.0.1 192.168.49.2 addons-153780 localhost minikube]
	I1124 03:13:58.728647  292146 provision.go:177] copyRemoteCerts
	I1124 03:13:58.728718  292146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:13:58.728757  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:58.745514  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:13:58.846114  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:13:58.863200  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:13:58.880979  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 03:13:58.898579  292146 provision.go:87] duration metric: took 351.823085ms to configureAuth
	I1124 03:13:58.898606  292146 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:13:58.898851  292146 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:13:58.898987  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:58.915320  292146 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:58.915626  292146 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1124 03:13:58.915643  292146 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:13:59.230496  292146 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
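The tee/restart above drops the insecure-registry flag into a sysconfig file that cri-o reads on restart; a quick sanity check on the node, assuming nothing has rewritten it since:

    cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio           # active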
	I1124 03:13:59.230570  292146 machine.go:97] duration metric: took 4.208874429s to provisionDockerMachine
	I1124 03:13:59.230605  292146 client.go:176] duration metric: took 11.439829028s to LocalClient.Create
	I1124 03:13:59.230661  292146 start.go:167] duration metric: took 11.439934843s to libmachine.API.Create "addons-153780"
	I1124 03:13:59.230688  292146 start.go:293] postStartSetup for "addons-153780" (driver="docker")
	I1124 03:13:59.230726  292146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:13:59.230820  292146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:13:59.230930  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:59.247844  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:13:59.350438  292146 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:13:59.353811  292146 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:13:59.353841  292146 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:13:59.353868  292146 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 03:13:59.353952  292146 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 03:13:59.353994  292146 start.go:296] duration metric: took 123.28567ms for postStartSetup
	I1124 03:13:59.354321  292146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-153780
	I1124 03:13:59.371765  292146 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/config.json ...
	I1124 03:13:59.372047  292146 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:13:59.372097  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:59.388841  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:13:59.487479  292146 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:13:59.493008  292146 start.go:128] duration metric: took 11.705958004s to createHost
	I1124 03:13:59.493038  292146 start.go:83] releasing machines lock for "addons-153780", held for 11.706089008s
	I1124 03:13:59.493120  292146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-153780
	I1124 03:13:59.509938  292146 ssh_runner.go:195] Run: cat /version.json
	I1124 03:13:59.510000  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:59.510026  292146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:13:59.510081  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:59.531640  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:13:59.534604  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:13:59.719986  292146 ssh_runner.go:195] Run: systemctl --version
	I1124 03:13:59.727082  292146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:13:59.764920  292146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:13:59.769189  292146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:13:59.769273  292146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:13:59.796878  292146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 03:13:59.796913  292146 start.go:496] detecting cgroup driver to use...
	I1124 03:13:59.796946  292146 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 03:13:59.796997  292146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:13:59.814586  292146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:13:59.827496  292146 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:13:59.827561  292146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:13:59.844852  292146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:13:59.863634  292146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:13:59.988001  292146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:14:00.317634  292146 docker.go:234] disabling docker service ...
	I1124 03:14:00.317742  292146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:14:00.354122  292146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:14:00.371621  292146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:14:00.501237  292146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:14:00.626977  292146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
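Because crio is the selected runtime, minikube stops and masks both docker and cri-docker so neither can reclaim the CRI socket. Masking can be confirmed on the node with (unit names as used in the log; is-enabled reports "masked" and exits non-zero for masked units):

    systemctl is-enabled docker.service cri-docker.service
    # masked
    # masked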
	I1124 03:14:00.640382  292146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:14:00.654937  292146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:14:00.655027  292146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:14:00.663955  292146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 03:14:00.664025  292146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:14:00.673055  292146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:14:00.682220  292146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:14:00.691352  292146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:14:00.701103  292146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:14:00.710097  292146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:14:00.723784  292146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
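Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with these effective settings (reconstructed from the commands themselves, not read back from the node):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",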
	I1124 03:14:00.732361  292146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:14:00.739950  292146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:14:00.747816  292146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:14:00.877425  292146 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:14:01.059664  292146 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:14:01.059761  292146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:14:01.063869  292146 start.go:564] Will wait 60s for crictl version
	I1124 03:14:01.063994  292146 ssh_runner.go:195] Run: which crictl
	I1124 03:14:01.067723  292146 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:14:01.094048  292146 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
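The same handshake can be reproduced by hand; pointing crictl directly at the socket minikube waits on avoids depending on /etc/crictl.yaml (the flag is standard crictl):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version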
	I1124 03:14:01.094224  292146 ssh_runner.go:195] Run: crio --version
	I1124 03:14:01.125950  292146 ssh_runner.go:195] Run: crio --version
	I1124 03:14:01.161168  292146 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:14:01.164074  292146 cli_runner.go:164] Run: docker network inspect addons-153780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:14:01.181261  292146 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 03:14:01.185815  292146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:14:01.197645  292146 kubeadm.go:884] updating cluster {Name:addons-153780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-153780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:14:01.197770  292146 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:14:01.197832  292146 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:14:01.233157  292146 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:14:01.233184  292146 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:14:01.233240  292146 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:14:01.260629  292146 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:14:01.260655  292146 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:14:01.260663  292146 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1124 03:14:01.260758  292146 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-153780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-153780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
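This [Unit]/[Service] fragment is written as a systemd drop-in (the 363-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below); the merged unit the node actually runs can be viewed with:

    systemctl cat kubelet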
	I1124 03:14:01.260844  292146 ssh_runner.go:195] Run: crio config
	I1124 03:14:01.333752  292146 cni.go:84] Creating CNI manager for ""
	I1124 03:14:01.333817  292146 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:14:01.333858  292146 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:14:01.333911  292146 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-153780 NodeName:addons-153780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:14:01.334115  292146 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-153780"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
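Before it is handed to kubeadm, the generated config (scp'd to /var/tmp/minikube/kubeadm.yaml.new below) can be checked for schema errors; recent kubeadm releases ship a validator for exactly this (assumed present in the v1.34.1 binaries minikube installs):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new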
	I1124 03:14:01.334233  292146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:14:01.342459  292146 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:14:01.342590  292146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:14:01.350328  292146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1124 03:14:01.363617  292146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:14:01.376720  292146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1124 03:14:01.390411  292146 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:14:01.394171  292146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:14:01.403932  292146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:14:01.522107  292146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:14:01.537430  292146 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780 for IP: 192.168.49.2
	I1124 03:14:01.537500  292146 certs.go:195] generating shared ca certs ...
	I1124 03:14:01.537532  292146 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:01.537736  292146 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 03:14:02.493979  292146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt ...
	I1124 03:14:02.494014  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt: {Name:mk226cdfc793e85d0a3112b814b9be095b5ed993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:02.494274  292146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key ...
	I1124 03:14:02.494291  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key: {Name:mkdb31c096e2ce62729da2c9c4457652a692de4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:02.494385  292146 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 03:14:02.641646  292146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt ...
	I1124 03:14:02.641675  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt: {Name:mka4de975327d77cfeb05706ee704457ea7ab8ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:02.641846  292146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key ...
	I1124 03:14:02.641860  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key: {Name:mkcb375acb16a1cfd2c844cf4167c1342ebaf3be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:02.641941  292146 certs.go:257] generating profile certs ...
	I1124 03:14:02.642009  292146 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.key
	I1124 03:14:02.642024  292146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt with IP's: []
	I1124 03:14:02.778812  292146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt ...
	I1124 03:14:02.778847  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: {Name:mk834a93ff488a7958ff2898bbc70e2dc8d763db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:02.779026  292146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.key ...
	I1124 03:14:02.779041  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.key: {Name:mkd956f653538a237e5b9f5f7ab8997897f2f672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:02.779123  292146 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.key.6249716f
	I1124 03:14:02.779146  292146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.crt.6249716f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 03:14:03.490148  292146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.crt.6249716f ...
	I1124 03:14:03.490181  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.crt.6249716f: {Name:mkaa045e816925e18b14c782038ccf8c377c3849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:03.490366  292146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.key.6249716f ...
	I1124 03:14:03.490380  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.key.6249716f: {Name:mkb10a71376e57f7735da7ed37052f88f0797d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:03.490485  292146 certs.go:382] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.crt.6249716f -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.crt
	I1124 03:14:03.490565  292146 certs.go:386] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.key.6249716f -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.key
	I1124 03:14:03.490619  292146 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.key
	I1124 03:14:03.490640  292146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.crt with IP's: []
	I1124 03:14:03.692027  292146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.crt ...
	I1124 03:14:03.692061  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.crt: {Name:mk287a247827b8c2fd1687dc3f4b741f4f06a696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:03.692247  292146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.key ...
	I1124 03:14:03.692263  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.key: {Name:mked341a3485b5677508e9324292afb4093d7fe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:03.692457  292146 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:14:03.692504  292146 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:14:03.692535  292146 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:14:03.692568  292146 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
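
The certs.go phase above is standard crypto/x509 work: one self-signed CA per concern (minikubeCA for the cluster, proxyClientCA for the front proxy), then leaf certs signed by them, with the apiserver cert's SANs matching the IPs in the log (10.96.0.1 is the in-cluster kubernetes Service VIP, 192.168.49.2 the node). A compact sketch of the same flow; this is illustrative, not minikube's code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed CA, analogous to `generating "minikubeCA" ca cert` above.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0), // CertExpiration:26280h0m0s = 3y
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Apiserver serving cert signed by that CA, SANs as in the log line
        // "Generating cert ... with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]".
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        _, _ = x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
    }
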
	I1124 03:14:03.693177  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:14:03.711166  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 03:14:03.732299  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:14:03.751029  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:14:03.769202  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 03:14:03.786957  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:14:03.804226  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:14:03.822109  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:14:03.839976  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:14:03.858120  292146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:14:03.871005  292146 ssh_runner.go:195] Run: openssl version
	I1124 03:14:03.877461  292146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:14:03.886534  292146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:14:03.891093  292146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:14:03.891234  292146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:14:03.936237  292146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
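
The b5213941.0 symlink follows OpenSSL's CA-directory lookup convention: the link name is the certificate's subject hash (exactly what `openssl x509 -hash -noout` prints) plus a .0 suffix, which lets TLS clients on the node find minikubeCA without rebuilding the system bundle. A small sketch of deriving that link name (assumes the openssl binary on PATH):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            log.Fatal(err)
        }
        // e.g. "b5213941" -> /etc/ssl/certs/b5213941.0
        fmt.Println("/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0")
    }
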
	I1124 03:14:03.945108  292146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:14:03.948824  292146 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:14:03.948894  292146 kubeadm.go:401] StartCluster: {Name:addons-153780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-153780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:14:03.949000  292146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:14:03.949070  292146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:14:03.977538  292146 cri.go:89] found id: ""
	I1124 03:14:03.977660  292146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:14:03.985515  292146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:14:03.993182  292146 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:14:03.993291  292146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:14:04.002603  292146 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:14:04.002691  292146 kubeadm.go:158] found existing configuration files:
	
	I1124 03:14:04.002780  292146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:14:04.012431  292146 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:14:04.012501  292146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:14:04.020530  292146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:14:04.028903  292146 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:14:04.028989  292146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:14:04.037195  292146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:14:04.045640  292146 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:14:04.045760  292146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:14:04.053583  292146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:14:04.061633  292146 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:14:04.061753  292146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
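
The four grep/rm pairs above are one sweep: each kubeconfig that kubeadm would write is kept only if it already points at https://control-plane.minikube.internal:8443; on a first start none exist, so every grep exits with status 2 and the rm is a no-op. In outline (the run helper is a hypothetical stand-in for the ssh_runner.Run calls in the log):

    package main

    import "os/exec"

    // run stands in for ssh_runner.Run (hypothetical; executes on the node).
    func run(cmd string) error {
        return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{"admin.conf", "kubelet.conf",
            "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + f
            // Keep a config only if it already targets the expected endpoint;
            // otherwise clear it so kubeadm regenerates it from scratch.
            if err := run("sudo grep " + endpoint + " " + path); err != nil {
                _ = run("sudo rm -f " + path)
            }
        }
    }
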
	I1124 03:14:04.070029  292146 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:14:04.136691  292146 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 03:14:04.136950  292146 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:14:04.206136  292146 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:14:21.034026  292146 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:14:21.034087  292146 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:14:21.034176  292146 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:14:21.034235  292146 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 03:14:21.034273  292146 kubeadm.go:319] OS: Linux
	I1124 03:14:21.034321  292146 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:14:21.034373  292146 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 03:14:21.034423  292146 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:14:21.034519  292146 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:14:21.034573  292146 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:14:21.034624  292146 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:14:21.034672  292146 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:14:21.034720  292146 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:14:21.034766  292146 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 03:14:21.034838  292146 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:14:21.034931  292146 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:14:21.035021  292146 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:14:21.035083  292146 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:14:21.038169  292146 out.go:252]   - Generating certificates and keys ...
	I1124 03:14:21.038270  292146 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:14:21.038344  292146 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:14:21.038420  292146 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:14:21.038509  292146 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:14:21.038576  292146 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:14:21.038659  292146 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:14:21.038718  292146 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:14:21.038839  292146 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-153780 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 03:14:21.038896  292146 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:14:21.039015  292146 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-153780 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 03:14:21.039085  292146 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:14:21.039152  292146 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:14:21.039200  292146 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:14:21.039259  292146 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:14:21.039314  292146 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:14:21.039385  292146 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:14:21.039448  292146 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:14:21.039516  292146 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:14:21.039574  292146 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:14:21.039659  292146 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:14:21.039729  292146 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:14:21.042702  292146 out.go:252]   - Booting up control plane ...
	I1124 03:14:21.042906  292146 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:14:21.043004  292146 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:14:21.043077  292146 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:14:21.043211  292146 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:14:21.043352  292146 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:14:21.043475  292146 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:14:21.043632  292146 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:14:21.043691  292146 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:14:21.043874  292146 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:14:21.044012  292146 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:14:21.044083  292146 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002054542s
	I1124 03:14:21.044182  292146 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:14:21.044276  292146 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1124 03:14:21.044427  292146 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:14:21.044551  292146 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:14:21.044641  292146 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.534236431s
	I1124 03:14:21.044717  292146 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.657687708s
	I1124 03:14:21.044803  292146 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503041608s
	I1124 03:14:21.044920  292146 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:14:21.045055  292146 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:14:21.045133  292146 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:14:21.045405  292146 kubeadm.go:319] [mark-control-plane] Marking the node addons-153780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:14:21.045490  292146 kubeadm.go:319] [bootstrap-token] Using token: 1ng5of.h0h75cft7s8kvxk0
	I1124 03:14:21.048772  292146 out.go:252]   - Configuring RBAC rules ...
	I1124 03:14:21.048937  292146 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:14:21.049074  292146 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:14:21.049287  292146 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:14:21.049471  292146 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:14:21.049608  292146 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:14:21.049729  292146 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:14:21.049881  292146 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:14:21.049955  292146 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:14:21.050032  292146 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:14:21.050063  292146 kubeadm.go:319] 
	I1124 03:14:21.050160  292146 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:14:21.050172  292146 kubeadm.go:319] 
	I1124 03:14:21.050255  292146 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:14:21.050266  292146 kubeadm.go:319] 
	I1124 03:14:21.050293  292146 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:14:21.050377  292146 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:14:21.050440  292146 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:14:21.050584  292146 kubeadm.go:319] 
	I1124 03:14:21.050641  292146 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:14:21.050644  292146 kubeadm.go:319] 
	I1124 03:14:21.050700  292146 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:14:21.050712  292146 kubeadm.go:319] 
	I1124 03:14:21.050767  292146 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:14:21.050854  292146 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:14:21.050936  292146 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:14:21.050945  292146 kubeadm.go:319] 
	I1124 03:14:21.051030  292146 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:14:21.051115  292146 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:14:21.051123  292146 kubeadm.go:319] 
	I1124 03:14:21.051213  292146 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1ng5of.h0h75cft7s8kvxk0 \
	I1124 03:14:21.051319  292146 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 \
	I1124 03:14:21.051360  292146 kubeadm.go:319] 	--control-plane 
	I1124 03:14:21.051370  292146 kubeadm.go:319] 
	I1124 03:14:21.051455  292146 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:14:21.051463  292146 kubeadm.go:319] 
	I1124 03:14:21.051547  292146 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1ng5of.h0h75cft7s8kvxk0 \
	I1124 03:14:21.051695  292146 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 
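
The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA: it is the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info, so a joining node can authenticate the control plane before trusting anything it serves. A sketch of reproducing the digest from the CA cert on the node:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm hashes the DER SubjectPublicKeyInfo, not the whole certificate.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
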
	I1124 03:14:21.051713  292146 cni.go:84] Creating CNI manager for ""
	I1124 03:14:21.051724  292146 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:14:21.054847  292146 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:14:21.057751  292146 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:14:21.062069  292146 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:14:21.062089  292146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:14:21.075276  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
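
The recommendation at cni.go:143 is a driver/runtime decision: with the docker driver, only the docker runtime can get by without an explicit CNI, so crio gets kindnet; the stat of /opt/cni/bin/portmap just confirms the CNI plugins the manifest depends on are present before the apply. Roughly the selection logic (illustrative only, not minikube's actual table):

    package main

    import "fmt"

    // recommendCNI mirrors the log line `"docker" driver + "crio" runtime
    // found, recommending kindnet` — an inferred simplification.
    func recommendCNI(driver, runtime string) string {
        if driver == "docker" && runtime != "docker" {
            return "kindnet"
        }
        return "bridge"
    }

    func main() {
        fmt.Println(recommendCNI("docker", "crio")) // kindnet
    }
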
	I1124 03:14:21.367073  292146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:14:21.367213  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:21.367296  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-153780 minikube.k8s.io/updated_at=2025_11_24T03_14_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=addons-153780 minikube.k8s.io/primary=true
	I1124 03:14:21.512955  292146 ops.go:34] apiserver oom_adj: -16
	I1124 03:14:21.513136  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:22.013265  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:22.514106  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:23.014107  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:23.513766  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:24.014124  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:24.513197  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:25.013867  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:25.513805  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:25.658870  292146 kubeadm.go:1114] duration metric: took 4.291710678s to wait for elevateKubeSystemPrivileges
	I1124 03:14:25.658906  292146 kubeadm.go:403] duration metric: took 21.710035602s to StartCluster
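
The repeated `kubectl get sa default` calls from 03:14:21 to 03:14:25 are a poll: the default ServiceAccount only appears once the controller-manager has finished bootstrapping the namespace, and it must exist before the minikube-rbac binding above (cluster-admin for kube-system:default) can take effect. A sketch of the same wait, with an illustrative timeout:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
        deadline := time.Now().Add(2 * time.Minute) // timeout chosen for the sketch
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                return // ServiceAccount exists; privileges can be elevated
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
        }
        log.Fatal("timed out waiting for default ServiceAccount")
    }
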
	I1124 03:14:25.658923  292146 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:25.659041  292146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 03:14:25.659409  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:25.659637  292146 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:14:25.659796  292146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:14:25.660063  292146 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:14:25.660108  292146 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 03:14:25.660202  292146 addons.go:70] Setting yakd=true in profile "addons-153780"
	I1124 03:14:25.660229  292146 addons.go:239] Setting addon yakd=true in "addons-153780"
	I1124 03:14:25.660262  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.660836  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.661302  292146 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-153780"
	I1124 03:14:25.661326  292146 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-153780"
	I1124 03:14:25.661350  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.661800  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.662308  292146 addons.go:70] Setting cloud-spanner=true in profile "addons-153780"
	I1124 03:14:25.662331  292146 addons.go:239] Setting addon cloud-spanner=true in "addons-153780"
	I1124 03:14:25.662354  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.662828  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.665413  292146 out.go:179] * Verifying Kubernetes components...
	I1124 03:14:25.666219  292146 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-153780"
	I1124 03:14:25.666275  292146 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-153780"
	I1124 03:14:25.666325  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.669291  292146 addons.go:70] Setting registry=true in profile "addons-153780"
	I1124 03:14:25.669318  292146 addons.go:239] Setting addon registry=true in "addons-153780"
	I1124 03:14:25.669374  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.669834  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.672502  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.682670  292146 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-153780"
	I1124 03:14:25.682776  292146 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-153780"
	I1124 03:14:25.682838  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.683354  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.683600  292146 addons.go:70] Setting registry-creds=true in profile "addons-153780"
	I1124 03:14:25.683615  292146 addons.go:239] Setting addon registry-creds=true in "addons-153780"
	I1124 03:14:25.683639  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.684037  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.704276  292146 addons.go:70] Setting storage-provisioner=true in profile "addons-153780"
	I1124 03:14:25.704320  292146 addons.go:239] Setting addon storage-provisioner=true in "addons-153780"
	I1124 03:14:25.704356  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.706059  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.706493  292146 addons.go:70] Setting default-storageclass=true in profile "addons-153780"
	I1124 03:14:25.706512  292146 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-153780"
	I1124 03:14:25.706793  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.726530  292146 addons.go:70] Setting gcp-auth=true in profile "addons-153780"
	I1124 03:14:25.726572  292146 mustload.go:66] Loading cluster: addons-153780
	I1124 03:14:25.726777  292146 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:14:25.727053  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.727340  292146 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-153780"
	I1124 03:14:25.727364  292146 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-153780"
	I1124 03:14:25.727631  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.741770  292146 addons.go:70] Setting ingress=true in profile "addons-153780"
	I1124 03:14:25.741802  292146 addons.go:239] Setting addon ingress=true in "addons-153780"
	I1124 03:14:25.741854  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.742356  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.744754  292146 addons.go:70] Setting volcano=true in profile "addons-153780"
	I1124 03:14:25.744782  292146 addons.go:239] Setting addon volcano=true in "addons-153780"
	I1124 03:14:25.744816  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.745286  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.760676  292146 addons.go:70] Setting volumesnapshots=true in profile "addons-153780"
	I1124 03:14:25.760711  292146 addons.go:239] Setting addon volumesnapshots=true in "addons-153780"
	I1124 03:14:25.760747  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.761235  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.781478  292146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:14:25.804374  292146 addons.go:70] Setting ingress-dns=true in profile "addons-153780"
	I1124 03:14:25.804465  292146 addons.go:239] Setting addon ingress-dns=true in "addons-153780"
	I1124 03:14:25.804540  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.805141  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.833649  292146 addons.go:70] Setting inspektor-gadget=true in profile "addons-153780"
	I1124 03:14:25.833733  292146 addons.go:239] Setting addon inspektor-gadget=true in "addons-153780"
	I1124 03:14:25.833788  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.834304  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.854198  292146 addons.go:70] Setting metrics-server=true in profile "addons-153780"
	I1124 03:14:25.854283  292146 addons.go:239] Setting addon metrics-server=true in "addons-153780"
	I1124 03:14:25.854354  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.855347  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
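
Each addon goroutine starts the same way: flip the flag in the profile, then `docker container inspect --format={{.State.Status}}` to confirm the node container is still running before opening an SSH session to it, which is why that inspect line repeats once per addon. The check itself is a single exec:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "container", "inspect",
            "addons-153780", "--format", "{{.State.Status}}").Output()
        if err != nil {
            fmt.Println("container not found:", err)
            return
        }
        // "running" is the only state in which addon manifests are pushed.
        fmt.Println(strings.TrimSpace(string(out)) == "running")
    }
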
	I1124 03:14:25.875645  292146 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 03:14:25.886007  292146 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 03:14:25.889966  292146 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 03:14:25.890027  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 03:14:25.890134  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:25.902175  292146 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 03:14:25.913362  292146 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 03:14:25.913440  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 03:14:25.913545  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:25.934739  292146 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 03:14:25.937860  292146 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 03:14:25.937892  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 03:14:25.937971  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.041760  292146 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1124 03:14:26.047200  292146 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 03:14:26.047230  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 03:14:26.047311  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.062235  292146 addons.go:239] Setting addon default-storageclass=true in "addons-153780"
	I1124 03:14:26.062285  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:26.066883  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:26.066979  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:26.069983  292146 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-153780"
	I1124 03:14:26.070072  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:26.072880  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:26.101589  292146 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 03:14:26.101595  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 03:14:26.102226  292146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:14:26.104698  292146 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 03:14:26.104723  292146 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 03:14:26.104815  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.109888  292146 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 03:14:26.119388  292146 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 03:14:26.119412  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 03:14:26.119506  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	W1124 03:14:26.125083  292146 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 03:14:26.135667  292146 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:14:26.139121  292146 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 03:14:26.141297  292146 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:14:26.141316  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:14:26.141391  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.144689  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.155672  292146 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 03:14:26.135727  292146 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 03:14:26.160126  292146 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 03:14:26.160146  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 03:14:26.160214  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.163064  292146 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 03:14:26.163087  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 03:14:26.163151  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.190726  292146 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 03:14:26.190937  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 03:14:26.192190  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.192602  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.196395  292146 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 03:14:26.197348  292146 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 03:14:26.197367  292146 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 03:14:26.197440  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.202364  292146 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 03:14:26.202588  292146 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 03:14:26.202603  292146 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 03:14:26.202677  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.212166  292146 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 03:14:26.212193  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 03:14:26.212252  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.212698  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.216031  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 03:14:26.224694  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 03:14:26.237469  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 03:14:26.245663  292146 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 03:14:26.248573  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 03:14:26.252635  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 03:14:26.254679  292146 out.go:179]   - Using image docker.io/busybox:stable
	I1124 03:14:26.257566  292146 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 03:14:26.257589  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 03:14:26.257668  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.270513  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 03:14:26.273718  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 03:14:26.276647  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 03:14:26.276680  292146 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 03:14:26.276747  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.295769  292146 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:14:26.295791  292146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:14:26.295844  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.296007  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.317065  292146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:14:26.352619  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.357344  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.369370  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.377715  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.389956  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.407510  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.423166  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	W1124 03:14:26.429392  292146 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 03:14:26.429431  292146 retry.go:31] will retry after 320.525725ms: ssh: handshake failed: EOF
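
A dozen-plus SSH sessions are being opened against the same sshd at 127.0.0.1:33139 here, so an occasional handshake EOF is expected; retry.go treats it as transient and redials after a short delay instead of failing the addon. A generic sketch of that backoff (the jitter and attempt count are assumptions, not minikube's exact policy):

    package main

    import (
        "errors"
        "math/rand"
        "time"
    )

    // dialWithRetry retries a transient dial failure with jittered, growing
    // delays — a generic sketch of the behavior logged by retry.go:31.
    func dialWithRetry(dial func() error) error {
        delay := 300 * time.Millisecond
        var err error
        for attempt := 0; attempt < 5; attempt++ {
            if err = dial(); err == nil {
                return nil
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
            delay *= 2
        }
        return err
    }

    func main() {
        _ = dialWithRetry(func() error { return errors.New("ssh: handshake failed: EOF") })
    }
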
	I1124 03:14:26.436940  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.451705  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.456902  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.865829  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 03:14:26.912913  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 03:14:26.987802  292146 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 03:14:26.987826  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 03:14:27.000210  292146 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 03:14:27.000242  292146 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 03:14:27.017065  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 03:14:27.042647  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 03:14:27.125226  292146 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 03:14:27.125303  292146 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 03:14:27.132192  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 03:14:27.143986  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 03:14:27.144062  292146 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 03:14:27.159732  292146 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 03:14:27.159807  292146 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 03:14:27.192185  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:14:27.242145  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 03:14:27.242223  292146 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 03:14:27.242395  292146 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 03:14:27.242429  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 03:14:27.244686  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 03:14:27.246782  292146 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 03:14:27.246850  292146 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 03:14:27.269105  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 03:14:27.271105  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 03:14:27.273247  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:14:27.325778  292146 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 03:14:27.325852  292146 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 03:14:27.382639  292146 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 03:14:27.382715  292146 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 03:14:27.454362  292146 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 03:14:27.454479  292146 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 03:14:27.468000  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 03:14:27.479351  292146 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 03:14:27.479426  292146 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 03:14:27.493932  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 03:14:27.494009  292146 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 03:14:27.555697  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 03:14:27.652907  292146 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 03:14:27.652982  292146 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 03:14:27.674997  292146 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 03:14:27.675072  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 03:14:27.762247  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 03:14:27.762324  292146 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 03:14:27.830157  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 03:14:27.862016  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 03:14:27.862094  292146 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 03:14:27.917175  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 03:14:27.917257  292146 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 03:14:28.015244  292146 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.698102319s)
	I1124 03:14:28.015497  292146 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.913241568s)
	I1124 03:14:28.015645  292146 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
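
Unescaped, the sed pipeline that just completed above inserts a hosts block ahead of CoreDNS's forward directive and a log directive ahead of errors. The resulting Corefile fragment looks like this (the surrounding server block is abbreviated from a stock kubeadm CoreDNS config; only the hosts and log lines come from the command in the log):

    .:53 {
        errors
        log
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
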
	I1124 03:14:28.017010  292146 node_ready.go:35] waiting up to 6m0s for node "addons-153780" to be "Ready" ...
	I1124 03:14:28.040729  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.174865508s)
	I1124 03:14:28.176339  292146 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 03:14:28.176409  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 03:14:28.231621  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 03:14:28.290899  292146 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 03:14:28.290924  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 03:14:28.536929  292146 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-153780" context rescaled to 1 replicas
	I1124 03:14:28.687335  292146 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 03:14:28.687411  292146 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 03:14:28.949628  292146 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 03:14:28.949700  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 03:14:29.016217  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.103266144s)
	I1124 03:14:29.016343  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.999253663s)
	I1124 03:14:29.204784  292146 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 03:14:29.204855  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 03:14:29.339211  292146 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 03:14:29.339287  292146 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 03:14:29.437688  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 03:14:29.746059  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.613821766s)
	I1124 03:14:29.746248  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.703516167s)
	W1124 03:14:30.044317  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:30.407647  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.215358808s)
	I1124 03:14:31.922049  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.677285027s)
	I1124 03:14:31.922090  292146 addons.go:495] Verifying addon ingress=true in "addons-153780"
	I1124 03:14:31.922262  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.65308728s)
	I1124 03:14:31.922320  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.651153792s)
	I1124 03:14:31.922525  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.649196352s)
	I1124 03:14:31.922619  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.454549689s)
	I1124 03:14:31.922635  292146 addons.go:495] Verifying addon registry=true in "addons-153780"
	I1124 03:14:31.922700  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.36694058s)
	I1124 03:14:31.922712  292146 addons.go:495] Verifying addon metrics-server=true in "addons-153780"
	I1124 03:14:31.922750  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.092518519s)
	I1124 03:14:31.923127  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.691423266s)
	W1124 03:14:31.923317  292146 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 03:14:31.923360  292146 retry.go:31] will retry after 172.54789ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the apply failure quoted above)
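
The failure above is a CRD registration race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, before the new API group is discoverable, so the REST mapping lookup fails and minikube retries (and later re-applies with --force, below). A minimal Go sketch of the usual remedy, waiting for the CRD's Established condition before applying dependent resources; the kubeconfig path and CRD name are taken from the log, the rest is illustrative, not minikube's code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := apiextclient.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	crd := "volumesnapshotclasses.snapshot.storage.k8s.io"
    	// Poll until the apiserver marks the CRD Established; only then can
    	// objects of kind VolumeSnapshotClass be mapped and applied.
    	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			obj, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, crd, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // CRD not created yet; keep polling
    			}
    			for _, c := range obj.Status.Conditions {
    				if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("CRD established:", crd)
    }
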
	I1124 03:14:31.926129  292146 out.go:179] * Verifying registry addon...
	I1124 03:14:31.928180  292146 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-153780 service yakd-dashboard -n yakd-dashboard
	
	I1124 03:14:31.928220  292146 out.go:179] * Verifying ingress addon...
	I1124 03:14:31.931137  292146 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 03:14:31.933143  292146 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 03:14:31.942708  292146 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 03:14:31.942734  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:31.943376  292146 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 03:14:31.943395  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 03:14:31.950207  292146 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
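
The storageclass warning above is a routine optimistic-concurrency conflict: another writer bumped the StorageClass's resourceVersion between minikube's read and its update. A hedged sketch of the standard client-go pattern for this, re-running the read-modify-write cycle on each 409 Conflict; the object name and annotation mirror the error message, but the edit itself is illustrative:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	// RetryOnConflict re-runs the closure, with a fresh Get, whenever
    	// the apiserver rejects the Update with a resourceVersion conflict.
    	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
    		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
    		return err
    	})
    	if err != nil {
    		panic(err)
    	}
    }
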
	I1124 03:14:32.096087  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 03:14:32.220602  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.78281077s)
	I1124 03:14:32.220653  292146 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-153780"
	I1124 03:14:32.223999  292146 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 03:14:32.227828  292146 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 03:14:32.240180  292146 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 03:14:32.240213  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
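
The kapi.go lines that follow are iterations of a label-selector poll: list the pods behind each addon's selector and wait until all of them leave Pending. A minimal Go sketch of one such iteration, assuming the same kubeconfig; the namespace and selector are copied from the log:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // allRunning reports whether every pod matching the selector is Running,
    // mirroring one pass of the wait loop logged below.
    func allRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	if len(pods.Items) == 0 {
    		return false, nil // nothing scheduled for the selector yet
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			return false, nil // e.g. the Pending states in the log
    		}
    	}
    	return true, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ok, err := allRunning(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("all running:", ok)
    }
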
	I1124 03:14:32.435418  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:32.438707  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 03:14:32.520342  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:32.731138  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:32.935234  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:32.936834  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:33.231184  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:33.435652  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:33.436553  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:33.705943  292146 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 03:14:33.706030  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:33.724278  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:33.731870  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:33.839271  292146 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 03:14:33.852262  292146 addons.go:239] Setting addon gcp-auth=true in "addons-153780"
	I1124 03:14:33.852310  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:33.852755  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:33.870876  292146 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 03:14:33.870936  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:33.887558  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:33.934273  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:33.936863  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:34.231373  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:34.434437  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:34.436766  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 03:14:34.520796  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:34.731314  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:34.901693  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.80555755s)
	I1124 03:14:34.901782  292146 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.030881694s)
	I1124 03:14:34.904748  292146 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 03:14:34.908199  292146 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 03:14:34.910995  292146 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 03:14:34.911019  292146 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 03:14:34.924771  292146 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 03:14:34.924795  292146 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 03:14:34.935127  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:34.937956  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:34.942298  292146 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 03:14:34.942319  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 03:14:34.955726  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 03:14:35.232245  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:35.436229  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:35.455177  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:35.456603  292146 addons.go:495] Verifying addon gcp-auth=true in "addons-153780"
	I1124 03:14:35.459809  292146 out.go:179] * Verifying gcp-auth addon...
	I1124 03:14:35.463086  292146 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 03:14:35.550664  292146 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 03:14:35.550690  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:35.731517  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:35.934636  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:35.936899  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:35.966914  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:36.231357  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:36.434823  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:36.436932  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:36.467169  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:36.731175  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:36.934176  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:36.936156  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:36.966021  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:37.021258  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:37.231223  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:37.435143  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:37.436778  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:37.466636  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:37.731334  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:37.934426  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:37.936628  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:37.966529  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:38.230909  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:38.433925  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:38.436055  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:38.467108  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:38.730873  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:38.934997  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:38.936274  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:38.966322  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:39.231821  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:39.435587  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:39.435735  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:39.466660  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:39.520364  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:39.731737  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:39.935693  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:39.937637  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:39.966306  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:40.231825  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:40.434892  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:40.437204  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:40.467518  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:40.731247  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:40.934126  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:40.936059  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:40.965858  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:41.230506  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:41.434680  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:41.437105  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:41.466226  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:41.730963  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:41.933819  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:41.935806  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:41.966719  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:42.021289  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:42.232036  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:42.434999  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:42.435967  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:42.467166  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:42.730733  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:42.934996  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:42.937405  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:42.965945  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:43.230695  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:43.434540  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:43.436999  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:43.467323  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:43.730840  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:43.934668  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:43.936595  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:43.966673  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:44.232037  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:44.435024  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:44.437123  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:44.467662  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:44.520423  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:44.731543  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:44.935205  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:44.936625  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:44.966470  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:45.238734  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:45.435507  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:45.436033  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:45.466689  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:45.731711  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:45.936040  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:45.936178  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:45.967005  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:46.231855  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:46.435053  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:46.436865  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:46.467514  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:46.731479  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:46.935141  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:46.936675  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:46.966644  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:47.021105  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:47.231642  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:47.434824  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:47.437049  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:47.466755  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:47.730660  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:47.935463  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:47.937974  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:47.966745  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:48.231200  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:48.434516  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:48.436740  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:48.467293  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:48.731595  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:48.934854  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:48.937815  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:48.966447  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:49.232437  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:49.436764  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:49.437326  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:49.466269  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:49.519912  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:49.730799  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:49.934606  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:49.936548  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:49.967015  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:50.231142  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:50.435492  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:50.436664  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:50.466993  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:50.730833  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:50.935557  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:50.935683  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:50.967710  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:51.230880  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:51.438532  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:51.442174  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:51.465930  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:51.520952  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:51.731221  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:51.934511  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:51.936737  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:51.966588  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:52.231516  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:52.434624  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:52.437004  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:52.466419  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:52.731140  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:52.934130  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:52.936170  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:52.965978  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:53.231209  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:53.434205  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:53.436236  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:53.466243  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:53.731240  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:53.934434  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:53.936489  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:53.966405  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:54.020319  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:54.231409  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:54.435295  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:54.436397  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:54.472758  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:54.731501  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:54.934599  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:54.936616  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:54.966199  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:55.231045  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:55.433893  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:55.435964  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:55.466887  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:55.731704  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:55.934379  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:55.936395  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:55.966176  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:56.021057  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:56.231011  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:56.433905  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:56.436227  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:56.465996  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:56.730430  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:56.934265  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:56.936594  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:56.966514  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:57.231319  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:57.434350  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:57.436412  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:57.466267  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:57.730981  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:57.933931  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:57.936107  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:57.966751  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:58.021226  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:58.230987  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:58.435260  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:58.436443  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:58.466280  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:58.730964  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:58.935383  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:58.935543  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:58.966448  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:59.231103  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:59.435030  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:59.436449  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:59.466548  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:59.731431  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:59.934211  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:59.936537  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:59.966123  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:00.244054  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:00.461133  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:00.466166  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:00.469334  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:15:00.520889  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:15:00.731128  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:00.935919  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:00.936295  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:00.966688  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:01.231782  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:01.436669  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:01.436804  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:01.466897  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:01.731896  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:01.937913  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:01.937879  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:01.967011  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:02.230899  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:02.434875  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:02.436140  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:02.466949  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:02.731158  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:02.934842  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:02.936947  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:02.966496  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:15:03.020383  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:15:07.243312  292146 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 03:15:07.243338  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:07.518345  292146 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 03:15:07.518370  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:07.518938  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:07.529803  292146 node_ready.go:49] node "addons-153780" is "Ready"
	I1124 03:15:07.529843  292146 node_ready.go:38] duration metric: took 39.512761217s for node "addons-153780" to be "Ready" ...
	I1124 03:15:07.529858  292146 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:15:07.529916  292146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:15:07.537946  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:07.567856  292146 api_server.go:72] duration metric: took 41.908182081s to wait for apiserver process to appear ...
	I1124 03:15:07.567885  292146 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:15:07.567906  292146 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1124 03:15:07.583208  292146 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1124 03:15:07.584380  292146 api_server.go:141] control plane version: v1.34.1
	I1124 03:15:07.584408  292146 api_server.go:131] duration metric: took 16.514848ms to wait for apiserver health ...
	I1124 03:15:07.584418  292146 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:15:07.618264  292146 system_pods.go:59] 19 kube-system pods found
	I1124 03:15:07.618308  292146 system_pods.go:61] "coredns-66bc5c9577-8cjzz" [813205d7-0fc2-43b3-b09e-fd0adc0ce6f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:15:07.618316  292146 system_pods.go:61] "csi-hostpath-attacher-0" [8cc94983-29b2-4964-ad78-8802ebd720ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 03:15:07.618325  292146 system_pods.go:61] "csi-hostpath-resizer-0" [aa4df875-9ab4-43ce-a426-3e5b33238e8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 03:15:07.618336  292146 system_pods.go:61] "csi-hostpathplugin-bgmwp" [7ac34006-f82d-4c20-be37-84bb40a7f088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 03:15:07.618351  292146 system_pods.go:61] "etcd-addons-153780" [7fccdbed-b6f5-44fc-84ed-ea8536e594a2] Running
	I1124 03:15:07.618356  292146 system_pods.go:61] "kindnet-l29tl" [5e804658-7ef3-4add-9c08-6bade404f062] Running
	I1124 03:15:07.618359  292146 system_pods.go:61] "kube-apiserver-addons-153780" [33128750-fdf0-4d19-ab27-35e1085f5427] Running
	I1124 03:15:07.618363  292146 system_pods.go:61] "kube-controller-manager-addons-153780" [32b7e482-1a0b-4345-99e4-1e6ba9820fa2] Running
	I1124 03:15:07.618368  292146 system_pods.go:61] "kube-ingress-dns-minikube" [9c7f31da-69b0-403d-8b5b-d77551be5987] Pending
	I1124 03:15:07.618373  292146 system_pods.go:61] "kube-proxy-5qvwc" [223de07d-a4d6-45d0-b693-86767f12aa77] Running
	I1124 03:15:07.618379  292146 system_pods.go:61] "kube-scheduler-addons-153780" [110900a6-740b-40d4-84f5-277228f10e28] Running
	I1124 03:15:07.618386  292146 system_pods.go:61] "metrics-server-85b7d694d7-k5xvk" [9b5678eb-b6ce-4ee5-bdb6-92da24f445f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 03:15:07.618399  292146 system_pods.go:61] "nvidia-device-plugin-daemonset-j7cvq" [3405d820-d287-4751-a138-a2c64aaf6375] Pending
	I1124 03:15:07.618403  292146 system_pods.go:61] "registry-6b586f9694-fhxm7" [37ea5e79-e46c-4241-ae8a-13e3a990caef] Pending
	I1124 03:15:07.618409  292146 system_pods.go:61] "registry-creds-764b6fb674-bk79n" [dc3ac97a-2ca5-48ca-9f54-00d5127f5172] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 03:15:07.618417  292146 system_pods.go:61] "registry-proxy-v264t" [ce8f2dcd-d97d-4ae3-96f5-94cb55bf9408] Pending
	I1124 03:15:07.618424  292146 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b6xbm" [59bfcec8-0051-4bc8-941f-3a818d75ef33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:07.618431  292146 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dwczj" [e4675ce7-03b5-4c7d-93f5-fea2600be8e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:07.618441  292146 system_pods.go:61] "storage-provisioner" [40735684-1273-4c6e-a78f-2682cfbeb780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:15:07.618466  292146 system_pods.go:74] duration metric: took 34.042194ms to wait for pod list to return data ...
	I1124 03:15:07.618476  292146 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:15:07.628519  292146 default_sa.go:45] found service account: "default"
	I1124 03:15:07.628546  292146 default_sa.go:55] duration metric: took 10.064016ms for default service account to be created ...
	I1124 03:15:07.628556  292146 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:15:07.650230  292146 system_pods.go:86] 19 kube-system pods found
	I1124 03:15:07.650414  292146 retry.go:31] will retry after 260.213876ms: missing components: kube-dns
	I1124 03:15:07.739423  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:07.923803  292146 system_pods.go:86] 19 kube-system pods found
	I1124 03:15:07.923841  292146 system_pods.go:89] "coredns-66bc5c9577-8cjzz" [813205d7-0fc2-43b3-b09e-fd0adc0ce6f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:15:07.923852  292146 system_pods.go:89] "csi-hostpath-attacher-0" [8cc94983-29b2-4964-ad78-8802ebd720ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 03:15:07.923860  292146 system_pods.go:89] "csi-hostpath-resizer-0" [aa4df875-9ab4-43ce-a426-3e5b33238e8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 03:15:07.923868  292146 system_pods.go:89] "csi-hostpathplugin-bgmwp" [7ac34006-f82d-4c20-be37-84bb40a7f088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 03:15:07.923877  292146 system_pods.go:89] "etcd-addons-153780" [7fccdbed-b6f5-44fc-84ed-ea8536e594a2] Running
	I1124 03:15:07.923882  292146 system_pods.go:89] "kindnet-l29tl" [5e804658-7ef3-4add-9c08-6bade404f062] Running
	I1124 03:15:07.923890  292146 system_pods.go:89] "kube-apiserver-addons-153780" [33128750-fdf0-4d19-ab27-35e1085f5427] Running
	I1124 03:15:07.923894  292146 system_pods.go:89] "kube-controller-manager-addons-153780" [32b7e482-1a0b-4345-99e4-1e6ba9820fa2] Running
	I1124 03:15:07.923908  292146 system_pods.go:89] "kube-ingress-dns-minikube" [9c7f31da-69b0-403d-8b5b-d77551be5987] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 03:15:07.923912  292146 system_pods.go:89] "kube-proxy-5qvwc" [223de07d-a4d6-45d0-b693-86767f12aa77] Running
	I1124 03:15:07.923923  292146 system_pods.go:89] "kube-scheduler-addons-153780" [110900a6-740b-40d4-84f5-277228f10e28] Running
	I1124 03:15:07.923930  292146 system_pods.go:89] "metrics-server-85b7d694d7-k5xvk" [9b5678eb-b6ce-4ee5-bdb6-92da24f445f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 03:15:07.923937  292146 system_pods.go:89] "nvidia-device-plugin-daemonset-j7cvq" [3405d820-d287-4751-a138-a2c64aaf6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 03:15:07.923947  292146 system_pods.go:89] "registry-6b586f9694-fhxm7" [37ea5e79-e46c-4241-ae8a-13e3a990caef] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 03:15:07.923952  292146 system_pods.go:89] "registry-creds-764b6fb674-bk79n" [dc3ac97a-2ca5-48ca-9f54-00d5127f5172] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 03:15:07.923969  292146 system_pods.go:89] "registry-proxy-v264t" [ce8f2dcd-d97d-4ae3-96f5-94cb55bf9408] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 03:15:07.923981  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xbm" [59bfcec8-0051-4bc8-941f-3a818d75ef33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:07.923987  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dwczj" [e4675ce7-03b5-4c7d-93f5-fea2600be8e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:07.923997  292146 system_pods.go:89] "storage-provisioner" [40735684-1273-4c6e-a78f-2682cfbeb780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:15:07.924016  292146 retry.go:31] will retry after 352.354756ms: missing components: kube-dns
	I1124 03:15:08.039063  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:08.040925  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:08.041092  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:08.236484  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:08.355371  292146 system_pods.go:86] 19 kube-system pods found
	I1124 03:15:08.355591  292146 retry.go:31] will retry after 345.748341ms: missing components: kube-dns
	I1124 03:15:08.444567  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:08.445033  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:08.467009  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:08.706917  292146 system_pods.go:86] 19 kube-system pods found
	I1124 03:15:08.707105  292146 retry.go:31] will retry after 475.906663ms: missing components: kube-dns
	I1124 03:15:08.733325  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:08.941475  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:08.941804  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:08.967311  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:09.188987  292146 system_pods.go:86] 19 kube-system pods found
	I1124 03:15:09.189016  292146 system_pods.go:89] "coredns-66bc5c9577-8cjzz" [813205d7-0fc2-43b3-b09e-fd0adc0ce6f0] Running
	I1124 03:15:09.189026  292146 system_pods.go:89] "csi-hostpath-attacher-0" [8cc94983-29b2-4964-ad78-8802ebd720ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 03:15:09.189036  292146 system_pods.go:89] "csi-hostpath-resizer-0" [aa4df875-9ab4-43ce-a426-3e5b33238e8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 03:15:09.189043  292146 system_pods.go:89] "csi-hostpathplugin-bgmwp" [7ac34006-f82d-4c20-be37-84bb40a7f088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 03:15:09.189051  292146 system_pods.go:89] "etcd-addons-153780" [7fccdbed-b6f5-44fc-84ed-ea8536e594a2] Running
	I1124 03:15:09.189056  292146 system_pods.go:89] "kindnet-l29tl" [5e804658-7ef3-4add-9c08-6bade404f062] Running
	I1124 03:15:09.189069  292146 system_pods.go:89] "kube-apiserver-addons-153780" [33128750-fdf0-4d19-ab27-35e1085f5427] Running
	I1124 03:15:09.189074  292146 system_pods.go:89] "kube-controller-manager-addons-153780" [32b7e482-1a0b-4345-99e4-1e6ba9820fa2] Running
	I1124 03:15:09.189079  292146 system_pods.go:89] "kube-ingress-dns-minikube" [9c7f31da-69b0-403d-8b5b-d77551be5987] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 03:15:09.189083  292146 system_pods.go:89] "kube-proxy-5qvwc" [223de07d-a4d6-45d0-b693-86767f12aa77] Running
	I1124 03:15:09.189087  292146 system_pods.go:89] "kube-scheduler-addons-153780" [110900a6-740b-40d4-84f5-277228f10e28] Running
	I1124 03:15:09.189099  292146 system_pods.go:89] "metrics-server-85b7d694d7-k5xvk" [9b5678eb-b6ce-4ee5-bdb6-92da24f445f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 03:15:09.189108  292146 system_pods.go:89] "nvidia-device-plugin-daemonset-j7cvq" [3405d820-d287-4751-a138-a2c64aaf6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 03:15:09.189122  292146 system_pods.go:89] "registry-6b586f9694-fhxm7" [37ea5e79-e46c-4241-ae8a-13e3a990caef] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 03:15:09.189129  292146 system_pods.go:89] "registry-creds-764b6fb674-bk79n" [dc3ac97a-2ca5-48ca-9f54-00d5127f5172] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 03:15:09.189135  292146 system_pods.go:89] "registry-proxy-v264t" [ce8f2dcd-d97d-4ae3-96f5-94cb55bf9408] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 03:15:09.189142  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xbm" [59bfcec8-0051-4bc8-941f-3a818d75ef33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:09.189152  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dwczj" [e4675ce7-03b5-4c7d-93f5-fea2600be8e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:09.189158  292146 system_pods.go:89] "storage-provisioner" [40735684-1273-4c6e-a78f-2682cfbeb780] Running
	I1124 03:15:09.189167  292146 system_pods.go:126] duration metric: took 1.560604954s to wait for k8s-apps to be running ...
	I1124 03:15:09.189179  292146 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:15:09.189235  292146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:15:09.203834  292146 system_svc.go:56] duration metric: took 14.646576ms WaitForService to wait for kubelet
	I1124 03:15:09.203865  292146 kubeadm.go:587] duration metric: took 43.544194887s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:15:09.203883  292146 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:15:09.207492  292146 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:15:09.207528  292146 node_conditions.go:123] node cpu capacity is 2
	I1124 03:15:09.207542  292146 node_conditions.go:105] duration metric: took 3.652395ms to run NodePressure ...
	I1124 03:15:09.207554  292146 start.go:242] waiting for startup goroutines ...
	I1124 03:15:09.232737  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:09.438204  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:09.438447  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:09.537784  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:32.435628  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:32.437027  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:32.466557  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:32.731602  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:32.947124  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:32.947286  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:32.971948  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:33.231263  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:33.439558  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:33.442345  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:33.466343  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:33.732310  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:33.936289  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:33.937928  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:33.967044  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:34.232016  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:34.438064  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:34.438621  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:34.466884  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:34.732066  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:34.934241  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:34.936376  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:34.966391  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:35.232549  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:35.435687  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:35.436951  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:35.467113  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:35.732305  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:35.934567  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:35.937067  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:35.967253  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:36.231906  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:36.437639  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:36.437894  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:36.466796  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:36.732545  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:36.940181  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:36.940832  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:37.039016  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:37.230819  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:37.436887  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:37.437166  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:37.466282  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:37.731423  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:37.937589  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:37.941253  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:37.979479  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:38.232552  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:38.435784  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:38.436115  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:38.465930  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:38.731262  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:38.935547  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:38.936684  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:38.970142  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:39.231618  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:39.436978  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:39.437735  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:39.466533  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:39.731923  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:39.937241  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:39.937596  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:39.966464  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:40.232549  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:40.435086  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:40.437956  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:40.466922  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:40.733023  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:40.936435  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:40.938391  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:40.966735  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:41.232078  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:41.434710  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:41.437183  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:41.466337  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:41.731926  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:41.942141  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:41.946672  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:42.007786  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:42.234649  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:42.435980  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:42.436430  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:42.466291  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:42.732597  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:42.937223  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:42.937391  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:42.968450  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:43.232434  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:43.434796  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:43.437243  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:43.466160  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:43.731739  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:43.934878  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:43.936714  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:43.966529  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:44.232022  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:44.436221  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:44.436410  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:44.465981  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:44.731728  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:44.934524  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:44.937108  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:44.966166  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:45.238362  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:45.434190  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:45.436821  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:45.466355  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:45.732038  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:45.936184  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:45.937003  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:45.966965  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:46.232119  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:46.435841  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:46.437457  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:46.466777  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:46.734990  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:46.934983  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:46.937236  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:46.966140  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:47.232276  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:47.434644  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:47.437202  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:47.466862  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:47.731469  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:47.934429  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:47.936343  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:47.966491  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:48.233596  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:48.436104  292146 kapi.go:107] duration metric: took 1m16.504968863s to wait for kubernetes.io/minikube-addons=registry ...
	I1124 03:15:48.436272  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:48.466520  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:48.731360  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:48.937272  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:48.965910  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:49.232172  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:49.436712  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:49.466652  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:49.732282  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:49.937042  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:49.967061  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:50.232933  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:50.437886  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:50.467831  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:50.731789  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:50.936762  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:50.966596  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:51.232471  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:51.436780  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:51.466547  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:51.731308  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:51.936458  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:51.966601  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:52.232170  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:52.437313  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:52.473242  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:52.741467  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:52.936929  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:52.966790  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:53.232095  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:53.436842  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:53.467441  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:53.732404  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:53.936407  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:53.966273  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:54.232422  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:54.439186  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:54.467199  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:54.733138  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:54.937452  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:54.966569  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:55.236180  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:55.438925  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:55.467385  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:55.732706  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:55.937423  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:55.966531  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:56.232287  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:56.436993  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:56.466950  292146 kapi.go:107] duration metric: took 1m21.003863869s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 03:15:56.470257  292146 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-153780 cluster.
	I1124 03:15:56.473466  292146 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 03:15:56.476745  292146 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
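	The three gcp-auth notes above describe the addon's mutating-webhook behaviour: every newly created pod gets the credentials mounted unless it opts out via the `gcp-auth-skip-secret` label. A minimal client-go sketch of such an opt-out pod (the pod name, image, and the "true" label value are illustrative assumptions, not taken from this report):

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the usual ~/.kube/config; minikube writes the addons-153780
		// context there once provisioning finishes.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // illustrative name
				// Per the note above, the gcp-auth webhook skips pods
				// carrying this label key.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}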
	I1124 03:15:56.733265  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:56.937309  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:57.231808  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:57.437071  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:57.731686  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:57.937263  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:58.231651  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:58.436518  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:58.731236  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:58.937678  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:59.231337  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:59.437910  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:59.731555  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:59.941276  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:16:00.286409  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:00.439665  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:16:00.731995  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:00.938114  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:16:01.232190  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:01.436336  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:16:01.731379  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:01.937644  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:16:02.235904  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:02.436885  292146 kapi.go:107] duration metric: took 1m30.503736927s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 03:16:02.731912  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:03.237180  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:03.732213  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:04.232197  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:04.732171  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:05.231142  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:05.732008  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:06.233864  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:06.732015  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:07.238005  292146 kapi.go:107] duration metric: took 1m35.010177979s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 03:16:07.241115  292146 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, registry-creds, amd-gpu-device-plugin, ingress-dns, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1124 03:16:07.244095  292146 addons.go:530] duration metric: took 1m41.583980373s for enable addons: enabled=[nvidia-device-plugin cloud-spanner registry-creds amd-gpu-device-plugin ingress-dns storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
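	The long run of kapi.go:96 lines above is a label-selector poll: each addon's pods are listed every few hundred milliseconds until they leave Pending, at which point kapi.go:107 logs the duration metric. A minimal client-go sketch of that pattern (the function name, the 500ms interval, and the timeout are assumptions, not minikube's actual kapi.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls pods matching selector in ns until all are Running.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			running := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					// Mirrors the kapi.go:96 "current state" lines above.
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					running = false
				}
			}
			if running {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForPods(ctx, kubernetes.NewForConfigOrDie(cfg), "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
			panic(err)
		}
	}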
	I1124 03:16:07.244157  292146 start.go:247] waiting for cluster config update ...
	I1124 03:16:07.244213  292146 start.go:256] writing updated cluster config ...
	I1124 03:16:07.244512  292146 ssh_runner.go:195] Run: rm -f paused
	I1124 03:16:07.253548  292146 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:16:07.274177  292146 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8cjzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.289938  292146 pod_ready.go:94] pod "coredns-66bc5c9577-8cjzz" is "Ready"
	I1124 03:16:07.289962  292146 pod_ready.go:86] duration metric: took 15.755312ms for pod "coredns-66bc5c9577-8cjzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.292995  292146 pod_ready.go:83] waiting for pod "etcd-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.300259  292146 pod_ready.go:94] pod "etcd-addons-153780" is "Ready"
	I1124 03:16:07.300288  292146 pod_ready.go:86] duration metric: took 7.263485ms for pod "etcd-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.314983  292146 pod_ready.go:83] waiting for pod "kube-apiserver-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.324250  292146 pod_ready.go:94] pod "kube-apiserver-addons-153780" is "Ready"
	I1124 03:16:07.324276  292146 pod_ready.go:86] duration metric: took 9.265282ms for pod "kube-apiserver-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.329050  292146 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.658002  292146 pod_ready.go:94] pod "kube-controller-manager-addons-153780" is "Ready"
	I1124 03:16:07.658034  292146 pod_ready.go:86] duration metric: took 328.954925ms for pod "kube-controller-manager-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.858570  292146 pod_ready.go:83] waiting for pod "kube-proxy-5qvwc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:08.258030  292146 pod_ready.go:94] pod "kube-proxy-5qvwc" is "Ready"
	I1124 03:16:08.258065  292146 pod_ready.go:86] duration metric: took 399.466171ms for pod "kube-proxy-5qvwc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:08.458608  292146 pod_ready.go:83] waiting for pod "kube-scheduler-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:08.857956  292146 pod_ready.go:94] pod "kube-scheduler-addons-153780" is "Ready"
	I1124 03:16:08.857988  292146 pod_ready.go:86] duration metric: took 399.349403ms for pod "kube-scheduler-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:08.858003  292146 pod_ready.go:40] duration metric: took 1.604414917s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
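	The pod_ready.go lines above apply a stricter test than the Running phase: a pod counts as "Ready" only when its PodReady condition is True. A minimal sketch of that check, assuming the standard corev1 types:

	// Package readiness sketches the check behind the pod_ready.go lines above.
	package readiness

	import corev1 "k8s.io/api/core/v1"

	// IsPodReady reports whether the pod's PodReady condition is True, which
	// is what `pod "..." is "Ready"` means in the log, as opposed to merely
	// having reached the Running phase.
	func IsPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}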
	I1124 03:16:08.914902  292146 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 03:16:08.918167  292146 out.go:179] * Done! kubectl is now configured to use "addons-153780" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 03:19:20 addons-153780 crio[830]: time="2025-11-24T03:19:20.012533431Z" level=info msg="Removed container 3dee1314f0b972557805eca0b502a0a5cbb627074300dbac593d4e17aaa4fff8: kube-system/registry-creds-764b6fb674-bk79n/registry-creds" id=e56ae37c-277d-4842-97cd-9edb258a032a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:19:33 addons-153780 crio[830]: time="2025-11-24T03:19:33.791073799Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-ggg4f/POD" id=596182ae-1572-4d53-9d50-af134feb2674 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:19:33 addons-153780 crio[830]: time="2025-11-24T03:19:33.791143519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:19:33 addons-153780 crio[830]: time="2025-11-24T03:19:33.798258212Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-ggg4f Namespace:default ID:495e56062811964d302e4fb1f4d318d3f798f9bc083212b0682a13bfba0cfc4c UID:0ddc73ea-5e6f-45e4-972f-433cfa259e2f NetNS:/var/run/netns/0d3b4c70-2cb5-4806-88fa-7dd7dd3eb571 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40027814f8}] Aliases:map[]}"
	Nov 24 03:19:33 addons-153780 crio[830]: time="2025-11-24T03:19:33.798445365Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-ggg4f to CNI network \"kindnet\" (type=ptp)"
	Nov 24 03:19:33 addons-153780 crio[830]: time="2025-11-24T03:19:33.813680365Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-ggg4f Namespace:default ID:495e56062811964d302e4fb1f4d318d3f798f9bc083212b0682a13bfba0cfc4c UID:0ddc73ea-5e6f-45e4-972f-433cfa259e2f NetNS:/var/run/netns/0d3b4c70-2cb5-4806-88fa-7dd7dd3eb571 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40027814f8}] Aliases:map[]}"
	Nov 24 03:19:33 addons-153780 crio[830]: time="2025-11-24T03:19:33.813842614Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-ggg4f for CNI network kindnet (type=ptp)"
	Nov 24 03:19:33 addons-153780 crio[830]: time="2025-11-24T03:19:33.817697309Z" level=info msg="Ran pod sandbox 495e56062811964d302e4fb1f4d318d3f798f9bc083212b0682a13bfba0cfc4c with infra container: default/hello-world-app-5d498dc89-ggg4f/POD" id=596182ae-1572-4d53-9d50-af134feb2674 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:19:33 addons-153780 crio[830]: time="2025-11-24T03:19:33.818959662Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=02b3866d-83dc-4a91-94e9-855941e76b44 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:19:33 addons-153780 crio[830]: time="2025-11-24T03:19:33.819072114Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=02b3866d-83dc-4a91-94e9-855941e76b44 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:19:33 addons-153780 crio[830]: time="2025-11-24T03:19:33.819106412Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=02b3866d-83dc-4a91-94e9-855941e76b44 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:19:33 addons-153780 crio[830]: time="2025-11-24T03:19:33.82137812Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=276c46ae-ba02-41fb-833c-088a4f88b2d9 name=/runtime.v1.ImageService/PullImage
	Nov 24 03:19:33 addons-153780 crio[830]: time="2025-11-24T03:19:33.823256059Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 24 03:19:34 addons-153780 crio[830]: time="2025-11-24T03:19:34.427367675Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=276c46ae-ba02-41fb-833c-088a4f88b2d9 name=/runtime.v1.ImageService/PullImage
	Nov 24 03:19:34 addons-153780 crio[830]: time="2025-11-24T03:19:34.42938786Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=85459e80-98b8-46cd-9276-5c78d0630e4b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:19:34 addons-153780 crio[830]: time="2025-11-24T03:19:34.433226104Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ad76f43f-9e2f-4f42-9388-1a26977e531f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:19:34 addons-153780 crio[830]: time="2025-11-24T03:19:34.450605663Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-ggg4f/hello-world-app" id=fdfafd32-fae1-4b13-94df-a84a3cc66842 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:19:34 addons-153780 crio[830]: time="2025-11-24T03:19:34.4507746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:19:34 addons-153780 crio[830]: time="2025-11-24T03:19:34.478019949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:19:34 addons-153780 crio[830]: time="2025-11-24T03:19:34.478218556Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8f1e6081ce4c5a5d0e836f382cf35eb79d966c0b2ee9a7098184fef6c0271b7e/merged/etc/passwd: no such file or directory"
	Nov 24 03:19:34 addons-153780 crio[830]: time="2025-11-24T03:19:34.478240439Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8f1e6081ce4c5a5d0e836f382cf35eb79d966c0b2ee9a7098184fef6c0271b7e/merged/etc/group: no such file or directory"
	Nov 24 03:19:34 addons-153780 crio[830]: time="2025-11-24T03:19:34.479552762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:19:34 addons-153780 crio[830]: time="2025-11-24T03:19:34.504539984Z" level=info msg="Created container 63fdd00cfba614f8162b91f50486c3c26af8d9fd90da6944f637c950554c59e0: default/hello-world-app-5d498dc89-ggg4f/hello-world-app" id=fdfafd32-fae1-4b13-94df-a84a3cc66842 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:19:34 addons-153780 crio[830]: time="2025-11-24T03:19:34.506802788Z" level=info msg="Starting container: 63fdd00cfba614f8162b91f50486c3c26af8d9fd90da6944f637c950554c59e0" id=83893f0f-3fe7-4743-ac71-e250bc6d23bf name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:19:34 addons-153780 crio[830]: time="2025-11-24T03:19:34.511870737Z" level=info msg="Started container" PID=7453 containerID=63fdd00cfba614f8162b91f50486c3c26af8d9fd90da6944f637c950554c59e0 description=default/hello-world-app-5d498dc89-ggg4f/hello-world-app id=83893f0f-3fe7-4743-ac71-e250bc6d23bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=495e56062811964d302e4fb1f4d318d3f798f9bc083212b0682a13bfba0cfc4c
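	The CRI-O log above is the kubelet driving CRI-O over the CRI gRPC API (RunPodSandbox, ImageStatus, PullImage, CreateContainer, StartContainer). A minimal sketch of a client for the same API, listing containers over CRI-O's default socket (the socket path and output format are assumptions; this would need to run as root on the node):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's usual CRI endpoint; the kubelet points at the same socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Roughly the information shown in the "container status" table below.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
		}
	}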
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	63fdd00cfba61       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   495e560628119       hello-world-app-5d498dc89-ggg4f            default
	e96c9cadc4eb7       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             16 seconds ago           Exited              registry-creds                           1                   02e9cadc52ef5       registry-creds-764b6fb674-bk79n            kube-system
	b9291bb9ab04b       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   caf57968223c8       nginx                                      default
	80b5c6cab402e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   ed2481873fecd       busybox                                    default
	3485678af5d19       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   bb3baae21d174       csi-hostpathplugin-bgmwp                   kube-system
	7548077813b01       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   bb3baae21d174       csi-hostpathplugin-bgmwp                   kube-system
	8afdc0c7272e7       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   bb3baae21d174       csi-hostpathplugin-bgmwp                   kube-system
	29d02ce914d44       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   bb3baae21d174       csi-hostpathplugin-bgmwp                   kube-system
	c4e122d9cd92b       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   1c49a9f3d02ce       ingress-nginx-controller-6c8bf45fb-pkh2n   ingress-nginx
	e8d8201b249e1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   581b238386602       gcp-auth-78565c9fb4-2jxmt                  gcp-auth
	f81bf2fb9f067       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   46bf47846e0ed       gadget-xjjvh                               gadget
	9af2707445bb4       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   bb3baae21d174       csi-hostpathplugin-bgmwp                   kube-system
	f1836e8795e70       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   aec4962db8d63       registry-proxy-v264t                       kube-system
	bcef0ab7ff5ee       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   f313cd20b2a72       csi-hostpath-attacher-0                    kube-system
	cdab3db25b699       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   4193d0241873f       yakd-dashboard-5ff678cb9-t6r26             yakd-dashboard
	53665e5932341       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             3 minutes ago            Exited              patch                                    2                   1c1daf467bbc7       ingress-nginx-admission-patch-gn8kb        ingress-nginx
	87ca81f329f99       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   f9242f30df386       snapshot-controller-7d9fbc56b8-dwczj       kube-system
	a3a1370782e11       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   990abaec552af       local-path-provisioner-648f6765c9-h9x7r    local-path-storage
	e055a401fe670       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   9a612b4b01bb4       nvidia-device-plugin-daemonset-j7cvq       kube-system
	40df276efacb0       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   bb4faa353df6a       snapshot-controller-7d9fbc56b8-b6xbm       kube-system
	be428aa3a2a99       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   4 minutes ago            Exited              create                                   0                   be3e4d9577b59       ingress-nginx-admission-create-bjhlt       ingress-nginx
	99c706b4665c8       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   bb08308253b2b       csi-hostpath-resizer-0                     kube-system
	dd5492a96c1be       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               4 minutes ago            Running             cloud-spanner-emulator                   0                   0a6826b5451db       cloud-spanner-emulator-5bdddb765-gp9qf     default
	e9e5ba99ab47b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   4 minutes ago            Running             csi-external-health-monitor-controller   0                   bb3baae21d174       csi-hostpathplugin-bgmwp                   kube-system
	4bf7144c7e3cb       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   4189cc4637430       registry-6b586f9694-fhxm7                  kube-system
	e0d73582da9fb       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   38c7a6f396cd1       kube-ingress-dns-minikube                  kube-system
	d731175eced00       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   241a1bbad6257       metrics-server-85b7d694d7-k5xvk            kube-system
	83cadb364a123       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   dec83e73546e1       storage-provisioner                        kube-system
	122b5b3da819c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   45083915f69a1       coredns-66bc5c9577-8cjzz                   kube-system
	8549937bedbf1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   ea682d6f2cc0e       kube-proxy-5qvwc                           kube-system
	243940847a312       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   37d03133a0aac       kindnet-l29tl                              kube-system
	78b45913b9ad7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   c4a55db6a6390       etcd-addons-153780                         kube-system
	338b16a84542c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   b966f0f5f7f5d       kube-apiserver-addons-153780               kube-system
	9d6da0d20171f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   69ee129826d8d       kube-controller-manager-addons-153780      kube-system
	44d60dc8fd9be       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   2dfeff642656b       kube-scheduler-addons-153780               kube-system
	
	
	==> coredns [122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824] <==
	[INFO] 10.244.0.15:36810 - 13947 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001673433s
	[INFO] 10.244.0.15:36810 - 14776 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00026159s
	[INFO] 10.244.0.15:36810 - 28268 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000193626s
	[INFO] 10.244.0.15:56159 - 14413 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000172285s
	[INFO] 10.244.0.15:56159 - 14166 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000252376s
	[INFO] 10.244.0.15:42234 - 56094 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000135345s
	[INFO] 10.244.0.15:42234 - 56547 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001761s
	[INFO] 10.244.0.15:48931 - 758 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000130504s
	[INFO] 10.244.0.15:48931 - 322 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011356s
	[INFO] 10.244.0.15:60061 - 58399 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001179578s
	[INFO] 10.244.0.15:60061 - 58852 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001178996s
	[INFO] 10.244.0.15:56263 - 41724 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000153019s
	[INFO] 10.244.0.15:56263 - 41312 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000228358s
	[INFO] 10.244.0.20:39443 - 34542 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000187808s
	[INFO] 10.244.0.20:34755 - 36988 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000194413s
	[INFO] 10.244.0.20:57268 - 49970 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115217s
	[INFO] 10.244.0.20:42364 - 61614 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000378268s
	[INFO] 10.244.0.20:35784 - 33968 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120994s
	[INFO] 10.244.0.20:39222 - 41340 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126713s
	[INFO] 10.244.0.20:60569 - 52252 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002022343s
	[INFO] 10.244.0.20:40069 - 627 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002995158s
	[INFO] 10.244.0.20:46483 - 28196 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000954494s
	[INFO] 10.244.0.20:38857 - 52313 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001736523s
	[INFO] 10.244.0.23:39001 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000184478s
	[INFO] 10.244.0.23:34598 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091791s
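
The NXDOMAIN/NOERROR pairs above are ordinary resolver search-path expansion, not lookup failures: with the typical Kubernetes ndots:5 resolver options, a client resolving registry.kube-system.svc.cluster.local first tries each search suffix in turn (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the host's us-east-2.compute.internal domain), collecting NXDOMAIN for each, before the unsuffixed name answers NOERROR. A and AAAA records are queried for every attempt, which is why each step appears twice.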
	
	
	==> describe nodes <==
	Name:               addons-153780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-153780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=addons-153780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_14_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-153780
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-153780"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:14:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-153780
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:19:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:19:27 +0000   Mon, 24 Nov 2025 03:14:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:19:27 +0000   Mon, 24 Nov 2025 03:14:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:19:27 +0000   Mon, 24 Nov 2025 03:14:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:19:27 +0000   Mon, 24 Nov 2025 03:15:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-153780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                d03a0b76-1e9c-4c87-8eaa-1652e42b6d37
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  default                     cloud-spanner-emulator-5bdddb765-gp9qf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  default                     hello-world-app-5d498dc89-ggg4f             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-xjjvh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  gcp-auth                    gcp-auth-78565c9fb4-2jxmt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-pkh2n    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m4s
	  kube-system                 coredns-66bc5c9577-8cjzz                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m10s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 csi-hostpathplugin-bgmwp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-addons-153780                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m15s
	  kube-system                 kindnet-l29tl                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m10s
	  kube-system                 kube-apiserver-addons-153780                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-controller-manager-addons-153780       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-5qvwc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-addons-153780                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 metrics-server-85b7d694d7-k5xvk             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m5s
	  kube-system                 nvidia-device-plugin-daemonset-j7cvq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 registry-6b586f9694-fhxm7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 registry-creds-764b6fb674-bk79n             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 registry-proxy-v264t                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 snapshot-controller-7d9fbc56b8-b6xbm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 snapshot-controller-7d9fbc56b8-dwczj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  local-path-storage          local-path-provisioner-648f6765c9-h9x7r     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-t6r26              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m8s                   kube-proxy       
	  Normal   Starting                 5m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node addons-153780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node addons-153780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m22s (x8 over 5m22s)  kubelet          Node addons-153780 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m15s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m15s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m15s                  kubelet          Node addons-153780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m15s                  kubelet          Node addons-153780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m15s                  kubelet          Node addons-153780 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m11s                  node-controller  Node addons-153780 event: Registered Node addons-153780 in Controller
	  Normal   NodeReady                4m28s                  kubelet          Node addons-153780 status is now: NodeReady
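
For reference, the "Allocated resources" percentages are taken against the node's allocatable 2 CPUs and 8022296Ki of memory: 1050m of CPU requests is 1050/2000 ≈ 52%, and 638Mi of memory requests is 638/7834Mi ≈ 8%. Limits are summed the same way, which is why the table notes they may exceed 100%: overcommit is permitted for limits, while requests must fit within allocatable.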
	
	
	==> dmesg <==
	[Nov24 01:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014604] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.520213] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036736] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.794505] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.307568] kauditd_printk_skb: 36 callbacks suppressed
	[Nov24 03:08] hrtimer: interrupt took 4583507 ns
	[Nov24 03:11] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 03:14] overlayfs: idmapped layers are currently not supported
	[  +0.056945] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce] <==
	{"level":"warn","ts":"2025-11-24T03:14:16.106132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.126861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.141355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.158568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.172950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.195137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.211023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.234807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.256072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.265739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.283612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.299482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.315220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.329111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.341565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.376827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.389732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.412442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.502674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:32.534779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:32.555207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:54.457595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:54.470978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:54.501931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:54.524240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51792","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [e8d8201b249e18df44325c8535a62429e576a461eb3c2193e682d2a5750823fd] <==
	2025/11/24 03:15:55 GCP Auth Webhook started!
	2025/11/24 03:16:09 Ready to marshal response ...
	2025/11/24 03:16:09 Ready to write response ...
	2025/11/24 03:16:09 Ready to marshal response ...
	2025/11/24 03:16:09 Ready to write response ...
	2025/11/24 03:16:09 Ready to marshal response ...
	2025/11/24 03:16:09 Ready to write response ...
	2025/11/24 03:16:31 Ready to marshal response ...
	2025/11/24 03:16:31 Ready to write response ...
	2025/11/24 03:16:39 Ready to marshal response ...
	2025/11/24 03:16:39 Ready to write response ...
	2025/11/24 03:16:42 Ready to marshal response ...
	2025/11/24 03:16:42 Ready to write response ...
	2025/11/24 03:16:42 Ready to marshal response ...
	2025/11/24 03:16:42 Ready to write response ...
	2025/11/24 03:16:51 Ready to marshal response ...
	2025/11/24 03:16:51 Ready to write response ...
	2025/11/24 03:17:07 Ready to marshal response ...
	2025/11/24 03:17:07 Ready to write response ...
	2025/11/24 03:17:13 Ready to marshal response ...
	2025/11/24 03:17:13 Ready to write response ...
	2025/11/24 03:19:33 Ready to marshal response ...
	2025/11/24 03:19:33 Ready to write response ...
	
	
	==> kernel <==
	 03:19:35 up  2:01,  0 user,  load average: 0.88, 1.88, 2.67
	Linux addons-153780 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684] <==
	I1124 03:17:26.620661       1 main.go:301] handling current node
	I1124 03:17:36.622110       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:17:36.622142       1 main.go:301] handling current node
	I1124 03:17:46.628387       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:17:46.628422       1 main.go:301] handling current node
	I1124 03:17:56.619844       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:17:56.619909       1 main.go:301] handling current node
	I1124 03:18:06.624670       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:18:06.624707       1 main.go:301] handling current node
	I1124 03:18:16.619838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:18:16.619874       1 main.go:301] handling current node
	I1124 03:18:26.622688       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:18:26.622819       1 main.go:301] handling current node
	I1124 03:18:36.627902       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:18:36.627937       1 main.go:301] handling current node
	I1124 03:18:46.620866       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:18:46.620901       1 main.go:301] handling current node
	I1124 03:18:56.619881       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:18:56.619916       1 main.go:301] handling current node
	I1124 03:19:06.623545       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:19:06.623585       1 main.go:301] handling current node
	I1124 03:19:16.629295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:19:16.629334       1 main.go:301] handling current node
	I1124 03:19:26.620069       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:19:26.620102       1 main.go:301] handling current node
	
	
	==> kube-apiserver [338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89] <==
	W1124 03:14:54.501915       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 03:14:54.517881       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 03:15:07.106159       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.80.42:443: connect: connection refused
	E1124 03:15:07.106299       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.80.42:443: connect: connection refused" logger="UnhandledError"
	W1124 03:15:07.109609       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.80.42:443: connect: connection refused
	E1124 03:15:07.109713       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.80.42:443: connect: connection refused" logger="UnhandledError"
	W1124 03:15:07.188692       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.80.42:443: connect: connection refused
	E1124 03:15:07.188733       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.80.42:443: connect: connection refused" logger="UnhandledError"
	W1124 03:15:11.827747       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 03:15:11.827828       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1124 03:15:11.829008       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.66.229:443: connect: connection refused" logger="UnhandledError"
	E1124 03:15:11.839554       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.66.229:443: connect: connection refused" logger="UnhandledError"
	E1124 03:15:11.840255       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.66.229:443: connect: connection refused" logger="UnhandledError"
	E1124 03:15:11.851034       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.66.229:443: connect: connection refused" logger="UnhandledError"
	I1124 03:15:11.996285       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 03:16:18.969642       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60580: use of closed network connection
	E1124 03:16:19.216840       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60606: use of closed network connection
	E1124 03:16:19.352296       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60636: use of closed network connection
	I1124 03:16:51.435680       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1124 03:17:13.415188       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1124 03:17:13.719511       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.193.133"}
	I1124 03:19:33.617836       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.209.123"}
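
Most of this block is startup noise. The gcp-auth mutating webhook "fails open" at 03:15:07 because the gcp-auth Service (10.96.80.42:443) is not serving yet; per the gcp-auth log above, the webhook only started at 03:15:55. Likewise, v1beta1.metrics.k8s.io returns 503s until metrics-server becomes reachable, and the GroupVersion is added to the ResourceManager moments later at 03:15:11. The "use of closed network connection" errors at 03:16:18-19 plausibly correspond to test clients dropping connections mid-request, and the final clusterIP allocations for nginx and hello-world-app match the Ingress test workloads listed under "describe nodes".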
	
	
	==> kube-controller-manager [9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6] <==
	I1124 03:14:24.468001       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:14:24.469134       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:14:24.469191       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:14:24.469236       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:14:24.469630       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 03:14:24.469874       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:14:24.471608       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:14:24.471831       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 03:14:24.472348       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 03:14:24.472652       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:14:24.478962       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:14:24.481426       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:14:24.487628       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 03:14:24.518009       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:14:24.518096       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:14:24.518126       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1124 03:14:30.499025       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1124 03:14:54.448212       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 03:14:54.448366       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1124 03:14:54.448425       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1124 03:14:54.490404       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1124 03:14:54.494390       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 03:14:54.549256       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:14:54.595578       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:15:09.460078       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942] <==
	I1124 03:14:26.676486       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:14:26.761007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:14:26.862652       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:14:26.862721       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 03:14:26.862829       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:14:26.922428       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:14:26.922534       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:14:26.932335       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:14:26.932667       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:14:26.932682       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:14:26.933952       1 config.go:200] "Starting service config controller"
	I1124 03:14:26.933962       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:14:26.933977       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:14:26.933981       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:14:26.933995       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:14:26.933999       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:14:26.937967       1 config.go:309] "Starting node config controller"
	I1124 03:14:26.937985       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:14:26.937993       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:14:27.034446       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:14:27.034509       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:14:27.034574       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4] <==
	I1124 03:14:17.774550       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1124 03:14:17.773901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:14:17.773844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:14:17.777177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:14:17.777462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:14:17.777595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:14:17.777720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:14:17.777837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:14:17.777952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:14:17.778063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:14:17.778176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:14:17.778278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:14:17.778521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 03:14:17.778587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:14:17.784050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:14:17.784170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:14:17.784175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:14:17.784224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:14:17.784229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:14:17.784274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:14:18.607879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:14:18.616901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:14:18.658416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:14:18.662011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1124 03:14:19.376972       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
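
The burst of "Failed to watch ... is forbidden" errors at 03:14:17-18 is the usual control-plane bootstrap race: the scheduler starts its informers before the apiserver has finished installing the system:kube-scheduler RBAC bindings. It resolves itself by 03:14:19, when the informer caches report synced.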
	
	
	==> kubelet <==
	Nov 24 03:17:20 addons-153780 kubelet[1282]: I1124 03:17:20.451443    1282 scope.go:117] "RemoveContainer" containerID="1c7bb03f866f7f401e99ffd1ae6f47f6ee627fb2d29c68c1c3597fecac723ed6"
	Nov 24 03:17:20 addons-153780 kubelet[1282]: I1124 03:17:20.464219    1282 scope.go:117] "RemoveContainer" containerID="f64cf3831b44b12f9a38fad66e7242bb953765581237fa1d577d781eec0258fd"
	Nov 24 03:17:28 addons-153780 kubelet[1282]: I1124 03:17:28.453493    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-fhxm7" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 03:18:03 addons-153780 kubelet[1282]: I1124 03:18:03.453778    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-v264t" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 03:18:28 addons-153780 kubelet[1282]: I1124 03:18:28.453985    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-j7cvq" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 03:18:30 addons-153780 kubelet[1282]: I1124 03:18:30.453913    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-fhxm7" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 03:19:17 addons-153780 kubelet[1282]: I1124 03:19:17.354478    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bk79n" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 03:19:18 addons-153780 kubelet[1282]: I1124 03:19:18.976679    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bk79n" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 03:19:18 addons-153780 kubelet[1282]: I1124 03:19:18.976738    1282 scope.go:117] "RemoveContainer" containerID="3dee1314f0b972557805eca0b502a0a5cbb627074300dbac593d4e17aaa4fff8"
	Nov 24 03:19:19 addons-153780 kubelet[1282]: I1124 03:19:19.011141    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=124.136747727 podStartE2EDuration="2m6.011105702s" podCreationTimestamp="2025-11-24 03:17:13 +0000 UTC" firstStartedPulling="2025-11-24 03:17:14.001487327 +0000 UTC m=+173.715485154" lastFinishedPulling="2025-11-24 03:17:15.875845318 +0000 UTC m=+175.589843129" observedRunningTime="2025-11-24 03:17:16.592309466 +0000 UTC m=+176.306307277" watchObservedRunningTime="2025-11-24 03:19:19.011105702 +0000 UTC m=+298.725103529"
	Nov 24 03:19:19 addons-153780 kubelet[1282]: I1124 03:19:19.982023    1282 scope.go:117] "RemoveContainer" containerID="3dee1314f0b972557805eca0b502a0a5cbb627074300dbac593d4e17aaa4fff8"
	Nov 24 03:19:19 addons-153780 kubelet[1282]: I1124 03:19:19.982130    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bk79n" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 03:19:19 addons-153780 kubelet[1282]: I1124 03:19:19.982643    1282 scope.go:117] "RemoveContainer" containerID="e96c9cadc4eb749918b9faf2963efb4ed6f34f4c19f2e0de31a40315fd0499f0"
	Nov 24 03:19:19 addons-153780 kubelet[1282]: E1124 03:19:19.982814    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-bk79n_kube-system(dc3ac97a-2ca5-48ca-9f54-00d5127f5172)\"" pod="kube-system/registry-creds-764b6fb674-bk79n" podUID="dc3ac97a-2ca5-48ca-9f54-00d5127f5172"
	Nov 24 03:19:20 addons-153780 kubelet[1282]: I1124 03:19:20.987616    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bk79n" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 03:19:20 addons-153780 kubelet[1282]: I1124 03:19:20.987678    1282 scope.go:117] "RemoveContainer" containerID="e96c9cadc4eb749918b9faf2963efb4ed6f34f4c19f2e0de31a40315fd0499f0"
	Nov 24 03:19:20 addons-153780 kubelet[1282]: E1124 03:19:20.987819    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-bk79n_kube-system(dc3ac97a-2ca5-48ca-9f54-00d5127f5172)\"" pod="kube-system/registry-creds-764b6fb674-bk79n" podUID="dc3ac97a-2ca5-48ca-9f54-00d5127f5172"
	Nov 24 03:19:31 addons-153780 kubelet[1282]: I1124 03:19:31.453569    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-v264t" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 03:19:33 addons-153780 kubelet[1282]: I1124 03:19:33.567319    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xbj9\" (UniqueName: \"kubernetes.io/projected/0ddc73ea-5e6f-45e4-972f-433cfa259e2f-kube-api-access-9xbj9\") pod \"hello-world-app-5d498dc89-ggg4f\" (UID: \"0ddc73ea-5e6f-45e4-972f-433cfa259e2f\") " pod="default/hello-world-app-5d498dc89-ggg4f"
	Nov 24 03:19:33 addons-153780 kubelet[1282]: I1124 03:19:33.567401    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0ddc73ea-5e6f-45e4-972f-433cfa259e2f-gcp-creds\") pod \"hello-world-app-5d498dc89-ggg4f\" (UID: \"0ddc73ea-5e6f-45e4-972f-433cfa259e2f\") " pod="default/hello-world-app-5d498dc89-ggg4f"
	Nov 24 03:19:33 addons-153780 kubelet[1282]: W1124 03:19:33.815887    1282 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4/crio-495e56062811964d302e4fb1f4d318d3f798f9bc083212b0682a13bfba0cfc4c WatchSource:0}: Error finding container 495e56062811964d302e4fb1f4d318d3f798f9bc083212b0682a13bfba0cfc4c: Status 404 returned error can't find the container with id 495e56062811964d302e4fb1f4d318d3f798f9bc083212b0682a13bfba0cfc4c
	Nov 24 03:19:34 addons-153780 kubelet[1282]: I1124 03:19:34.455384    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-j7cvq" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 03:19:35 addons-153780 kubelet[1282]: I1124 03:19:35.063292    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-ggg4f" podStartSLOduration=1.452512429 podStartE2EDuration="2.06327096s" podCreationTimestamp="2025-11-24 03:19:33 +0000 UTC" firstStartedPulling="2025-11-24 03:19:33.81942191 +0000 UTC m=+313.533419721" lastFinishedPulling="2025-11-24 03:19:34.430180441 +0000 UTC m=+314.144178252" observedRunningTime="2025-11-24 03:19:35.062090412 +0000 UTC m=+314.776088231" watchObservedRunningTime="2025-11-24 03:19:35.06327096 +0000 UTC m=+314.777268771"
	Nov 24 03:19:35 addons-153780 kubelet[1282]: I1124 03:19:35.453426    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-bk79n" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 03:19:35 addons-153780 kubelet[1282]: I1124 03:19:35.453500    1282 scope.go:117] "RemoveContainer" containerID="e96c9cadc4eb749918b9faf2963efb4ed6f34f4c19f2e0de31a40315fd0499f0"
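
Two things stand out in the kubelet log. The recurring "Unable to retrieve pull secret" messages are informational; image pulls proceed without the gcp-auth secret. The actual failure is registry-creds: its container is removed and restarted at 03:19:18-19 and then parked in CrashLoopBackOff ("back-off 10s restarting failed container=registry-creds"), which lines up with the TestAddons/parallel/RegistryCreds failure listed at the top of this report.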
	
	
	==> storage-provisioner [83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d] <==
	W1124 03:19:11.682432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:13.685415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:13.691795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:15.695099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:15.699618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:17.704240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:17.711091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:19.714826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:19.721461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:21.724320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:21.731404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:23.734717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:23.739039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:25.741962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:25.746479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:27.749713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:27.756153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:29.760113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:29.764730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:31.767606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:31.771932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:33.775541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:33.782728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:35.786724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:19:35.793278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
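
The storage-provisioner block above is pure deprecation noise: client-go emits the "v1 Endpoints is deprecated in v1.33+" warning on every read or write of the provisioner's Endpoints-based leader-election record, and the roughly 2-second cadence matches a leader-election renew loop. The remedy on the provisioner side is to take the lock on a coordination.k8s.io/v1 Lease instead. The following is a minimal sketch of that pattern using client-go's standard leader-election helpers, not minikube's actual code; the namespace/name mirror this report, and the holder identity is invented:

	package main

	import (
		"context"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		// In-cluster config; assumes the sketch runs as a pod with RBAC
		// permission to manage Leases in kube-system.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lease-based lock instead of the deprecated v1 Endpoints lock.
		// "kube-system"/"storage-provisioner" are taken from this report;
		// the Identity string is a made-up placeholder.
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "storage-provisioner",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: "sketch-holder"})
		if err != nil {
			panic(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* begin provisioning */ },
				OnStoppedLeading: func() { /* stop; let another replica take over */ },
			},
		})
	}

Renewing a Lease never touches the Endpoints API, so the warnings disappear; recent client-go releases have dropped the Endpoints-based lock entirely, leaving Leases as the supported mechanism.
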
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-153780 -n addons-153780
helpers_test.go:269: (dbg) Run:  kubectl --context addons-153780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-bjhlt ingress-nginx-admission-patch-gn8kb
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-153780 describe pod ingress-nginx-admission-create-bjhlt ingress-nginx-admission-patch-gn8kb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-153780 describe pod ingress-nginx-admission-create-bjhlt ingress-nginx-admission-patch-gn8kb: exit status 1 (91.421307ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bjhlt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gn8kb" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-153780 describe pod ingress-nginx-admission-create-bjhlt ingress-nginx-admission-patch-gn8kb: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (333.966674ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 03:19:36.877542  302044 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:19:36.878393  302044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:19:36.878436  302044 out.go:374] Setting ErrFile to fd 2...
	I1124 03:19:36.878491  302044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:19:36.878791  302044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:19:36.879128  302044 mustload.go:66] Loading cluster: addons-153780
	I1124 03:19:36.879542  302044 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:19:36.879589  302044 addons.go:622] checking whether the cluster is paused
	I1124 03:19:36.879723  302044 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:19:36.879761  302044 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:19:36.880329  302044 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:19:36.911084  302044 ssh_runner.go:195] Run: systemctl --version
	I1124 03:19:36.911150  302044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:19:36.929750  302044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:19:37.034508  302044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:19:37.034616  302044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:19:37.105898  302044 cri.go:89] found id: "84c77e0379e123d1dd895b17cd905e9ef57671b82003f9a1114c0b008ba59935"
	I1124 03:19:37.106027  302044 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:19:37.106033  302044 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:19:37.106045  302044 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:19:37.106057  302044 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:19:37.106062  302044 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:19:37.106066  302044 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:19:37.106069  302044 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:19:37.106072  302044 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:19:37.106079  302044 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:19:37.106087  302044 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:19:37.106106  302044 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:19:37.106115  302044 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:19:37.106119  302044 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:19:37.106122  302044 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:19:37.106132  302044 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:19:37.106141  302044 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:19:37.106151  302044 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:19:37.106155  302044 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:19:37.106158  302044 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:19:37.106163  302044 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:19:37.106166  302044 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:19:37.106170  302044 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:19:37.106180  302044 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:19:37.106199  302044 cri.go:89] found id: ""
	I1124 03:19:37.106287  302044 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:19:37.124959  302044 out.go:203] 
	W1124 03:19:37.127048  302044 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:19:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:19:37.127087  302044 out.go:285] * 
	W1124 03:19:37.132739  302044 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:19:37.134984  302044 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable ingress --alsologtostderr -v=1: exit status 11 (266.894783ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:19:37.198493  302154 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:19:37.199330  302154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:19:37.199345  302154 out.go:374] Setting ErrFile to fd 2...
	I1124 03:19:37.199352  302154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:19:37.199630  302154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:19:37.199954  302154 mustload.go:66] Loading cluster: addons-153780
	I1124 03:19:37.200341  302154 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:19:37.200360  302154 addons.go:622] checking whether the cluster is paused
	I1124 03:19:37.200468  302154 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:19:37.200484  302154 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:19:37.200984  302154 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:19:37.219074  302154 ssh_runner.go:195] Run: systemctl --version
	I1124 03:19:37.219138  302154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:19:37.237920  302154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:19:37.341453  302154 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:19:37.341554  302154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:19:37.371558  302154 cri.go:89] found id: "84c77e0379e123d1dd895b17cd905e9ef57671b82003f9a1114c0b008ba59935"
	I1124 03:19:37.371580  302154 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:19:37.371586  302154 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:19:37.371589  302154 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:19:37.371593  302154 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:19:37.371596  302154 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:19:37.371599  302154 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:19:37.371602  302154 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:19:37.371605  302154 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:19:37.371610  302154 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:19:37.371614  302154 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:19:37.371617  302154 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:19:37.371620  302154 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:19:37.371623  302154 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:19:37.371626  302154 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:19:37.371631  302154 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:19:37.371639  302154 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:19:37.371643  302154 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:19:37.371646  302154 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:19:37.371650  302154 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:19:37.371654  302154 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:19:37.371657  302154 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:19:37.371660  302154 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:19:37.371663  302154 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:19:37.371666  302154 cri.go:89] found id: ""
	I1124 03:19:37.371714  302154 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:19:37.385922  302154 out.go:203] 
	W1124 03:19:37.388594  302154 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:19:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:19:37.388630  302154 out.go:285] * 
	W1124 03:19:37.396817  302154 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:19:37.399505  302154 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.31s)
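
Every disable failure in this report has the same shape as the one above: the command aborts in minikube's pre-flight paused check, which shells out to `sudo runc list -f json` and treats any non-zero exit as fatal. On this crio node /run/runc does not exist, so the probe fails before any addon work starts. A minimal Go sketch of that probe, for illustration only (the JSON field names follow runc's documented list output; this is not minikube's exact code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// containerState mirrors two fields of "runc list -f json" output;
// the field set here is an assumption based on runc's documented format.
type containerState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On this node: "open /run/runc: no such file or directory".
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var states []containerState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		// This is the path that surfaces as MK_ADDON_DISABLE_PAUSED above.
		fmt.Println("check paused:", err)
		return
	}
	fmt.Println("paused:", ids)
}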

                                                
                                    
TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-xjjvh" [86cb35f7-04cb-4f34-a186-e37cc7c18c26] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003627071s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (271.722762ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:17:07.516574  300151 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:17:07.517929  300151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:07.517956  300151 out.go:374] Setting ErrFile to fd 2...
	I1124 03:17:07.517963  300151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:07.518235  300151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:17:07.518655  300151 mustload.go:66] Loading cluster: addons-153780
	I1124 03:17:07.519055  300151 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:17:07.519074  300151 addons.go:622] checking whether the cluster is paused
	I1124 03:17:07.519183  300151 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:17:07.519198  300151 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:17:07.519711  300151 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:17:07.543196  300151 ssh_runner.go:195] Run: systemctl --version
	I1124 03:17:07.543255  300151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:17:07.561709  300151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:17:07.665556  300151 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:17:07.665650  300151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:17:07.698206  300151 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:17:07.698231  300151 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:17:07.698236  300151 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:17:07.698240  300151 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:17:07.698243  300151 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:17:07.698249  300151 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:17:07.698252  300151 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:17:07.698255  300151 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:17:07.698258  300151 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:17:07.698275  300151 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:17:07.698280  300151 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:17:07.698283  300151 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:17:07.698286  300151 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:17:07.698289  300151 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:17:07.698292  300151 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:17:07.698297  300151 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:17:07.698301  300151 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:17:07.698304  300151 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:17:07.698307  300151 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:17:07.698310  300151 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:17:07.698316  300151 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:17:07.698319  300151 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:17:07.698322  300151 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:17:07.698327  300151 cri.go:89] found id: ""
	I1124 03:17:07.698379  300151 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:17:07.714043  300151 out.go:203] 
	W1124 03:17:07.717053  300151 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:17:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:17:07.717093  300151 out.go:285] * 
	W1124 03:17:07.722742  300151 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:17:07.725797  300151 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.28s)
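
The gadget pod itself became healthy within about 6s; only the trailing disable call failed, again inside the runc probe. One conceivable guard, shown below purely as an assumption and not as minikube's actual fix, is to treat a missing runc state directory as "nothing is paused" instead of an error:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// runcRoot is runc's default state dir; crun/crio setups may never create it.
// Hedged sketch of a possible mitigation, not the real minikube code path.
const runcRoot = "/run/runc"

func anythingPaused() (bool, error) {
	if _, err := os.Stat(runcRoot); errors.Is(err, fs.ErrNotExist) {
		// No runc state dir means runc manages no containers here,
		// so nothing can be runc-paused.
		return false, nil
	} else if err != nil {
		return false, err
	}
	// ... the real "sudo runc list -f json" probe would run here ...
	return false, nil
}

func main() {
	paused, err := anythingPaused()
	fmt.Println("anything paused:", paused, "err:", err)
}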

                                                
                                    
TestAddons/parallel/MetricsServer (5.36s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.447257ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-k5xvk" [9b5678eb-b6ce-4ee5-bdb6-92da24f445f3] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003212044s
addons_test.go:463: (dbg) Run:  kubectl --context addons-153780 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (263.758104ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:17:12.884800  300247 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:17:12.885630  300247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:12.885646  300247 out.go:374] Setting ErrFile to fd 2...
	I1124 03:17:12.885652  300247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:12.885919  300247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:17:12.886242  300247 mustload.go:66] Loading cluster: addons-153780
	I1124 03:17:12.886666  300247 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:17:12.886687  300247 addons.go:622] checking whether the cluster is paused
	I1124 03:17:12.886798  300247 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:17:12.886812  300247 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:17:12.887354  300247 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:17:12.904454  300247 ssh_runner.go:195] Run: systemctl --version
	I1124 03:17:12.904525  300247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:17:12.924676  300247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:17:13.029157  300247 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:17:13.029243  300247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:17:13.062430  300247 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:17:13.062534  300247 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:17:13.062554  300247 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:17:13.062569  300247 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:17:13.062573  300247 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:17:13.062577  300247 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:17:13.062581  300247 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:17:13.062584  300247 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:17:13.062602  300247 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:17:13.062615  300247 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:17:13.062618  300247 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:17:13.062622  300247 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:17:13.062625  300247 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:17:13.062628  300247 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:17:13.062631  300247 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:17:13.062643  300247 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:17:13.062651  300247 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:17:13.062656  300247 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:17:13.062659  300247 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:17:13.062662  300247 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:17:13.062678  300247 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:17:13.062688  300247 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:17:13.062692  300247 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:17:13.062695  300247 cri.go:89] found id: ""
	I1124 03:17:13.062766  300247 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:17:13.078066  300247 out.go:203] 
	W1124 03:17:13.080903  300247 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:17:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:17:13.080928  300247 out.go:285] * 
	W1124 03:17:13.086548  300247 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:17:13.089336  300247 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.36s)
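
The `found id:` lines above come from the crictl step that runs immediately before the failing runc call: it lists every kube-system container ID. The listing can be reproduced with the exact flags shown in the log; this Go sketch assumes only that crictl is installed on the node:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs runs the same crictl command the log shows
// (ssh_runner "crictl ps -a --quiet --label ...") and splits the IDs.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}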

                                                
                                    
TestAddons/parallel/CSI (51.65s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1124 03:16:25.879959  291389 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1124 03:16:25.883906  291389 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 03:16:25.883932  291389 kapi.go:107] duration metric: took 3.983581ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.994354ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-153780 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-153780 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [3e35b203-63ec-4a61-862e-f5027e1bf54d] Pending
helpers_test.go:352: "task-pv-pod" [3e35b203-63ec-4a61-862e-f5027e1bf54d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [3e35b203-63ec-4a61-862e-f5027e1bf54d] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00374224s
addons_test.go:572: (dbg) Run:  kubectl --context addons-153780 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-153780 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-153780 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-153780 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-153780 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-153780 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-153780 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [c370abaa-60e2-47ac-bfc3-87d4ed2c4ccd] Pending
helpers_test.go:352: "task-pv-pod-restore" [c370abaa-60e2-47ac-bfc3-87d4ed2c4ccd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [c370abaa-60e2-47ac-bfc3-87d4ed2c4ccd] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003481805s
addons_test.go:614: (dbg) Run:  kubectl --context addons-153780 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-153780 delete pod task-pv-pod-restore: (1.521210975s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-153780 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-153780 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (264.522346ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:17:17.053819  300681 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:17:17.054545  300681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:17.054560  300681 out.go:374] Setting ErrFile to fd 2...
	I1124 03:17:17.054566  300681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:17.054863  300681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:17:17.055207  300681 mustload.go:66] Loading cluster: addons-153780
	I1124 03:17:17.055666  300681 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:17:17.055688  300681 addons.go:622] checking whether the cluster is paused
	I1124 03:17:17.055831  300681 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:17:17.055850  300681 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:17:17.056435  300681 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:17:17.073939  300681 ssh_runner.go:195] Run: systemctl --version
	I1124 03:17:17.073999  300681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:17:17.091372  300681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:17:17.200887  300681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:17:17.200975  300681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:17:17.233393  300681 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:17:17.233416  300681 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:17:17.233422  300681 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:17:17.233426  300681 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:17:17.233429  300681 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:17:17.233433  300681 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:17:17.233437  300681 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:17:17.233440  300681 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:17:17.233443  300681 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:17:17.233455  300681 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:17:17.233459  300681 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:17:17.233462  300681 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:17:17.233466  300681 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:17:17.233474  300681 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:17:17.233478  300681 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:17:17.233488  300681 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:17:17.233492  300681 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:17:17.233496  300681 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:17:17.233499  300681 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:17:17.233502  300681 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:17:17.233507  300681 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:17:17.233510  300681 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:17:17.233513  300681 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:17:17.233516  300681 cri.go:89] found id: ""
	I1124 03:17:17.233583  300681 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:17:17.248584  300681 out.go:203] 
	W1124 03:17:17.251720  300681 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:17:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:17:17.251751  300681 out.go:285] * 
	W1124 03:17:17.257362  300681 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:17:17.260240  300681 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (257.078854ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:17:17.315813  300724 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:17:17.316586  300724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:17.316601  300724 out.go:374] Setting ErrFile to fd 2...
	I1124 03:17:17.316607  300724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:17.316879  300724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:17:17.317174  300724 mustload.go:66] Loading cluster: addons-153780
	I1124 03:17:17.317591  300724 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:17:17.317608  300724 addons.go:622] checking whether the cluster is paused
	I1124 03:17:17.317724  300724 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:17:17.317742  300724 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:17:17.318262  300724 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:17:17.335219  300724 ssh_runner.go:195] Run: systemctl --version
	I1124 03:17:17.335284  300724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:17:17.353123  300724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:17:17.460825  300724 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:17:17.460909  300724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:17:17.490589  300724 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:17:17.490660  300724 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:17:17.490680  300724 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:17:17.490705  300724 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:17:17.490734  300724 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:17:17.490759  300724 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:17:17.490783  300724 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:17:17.490805  300724 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:17:17.490836  300724 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:17:17.490862  300724 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:17:17.490883  300724 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:17:17.490905  300724 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:17:17.490924  300724 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:17:17.490957  300724 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:17:17.490975  300724 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:17:17.490999  300724 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:17:17.491048  300724 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:17:17.491074  300724 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:17:17.491095  300724 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:17:17.491118  300724 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:17:17.491152  300724 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:17:17.491174  300724 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:17:17.491191  300724 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:17:17.491211  300724 cri.go:89] found id: ""
	I1124 03:17:17.491287  300724 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:17:17.506172  300724 out.go:203] 
	W1124 03:17:17.509084  300724 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:17:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:17:17.509110  300724 out.go:285] * 
	W1124 03:17:17.515109  300724 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:17:17.518302  300724 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (51.65s)
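All of the MK_ADDON_*_PAUSED failures in this report share the failure mode visible above: before touching an addon, minikube checks whether the cluster is paused (addons.go:622) by listing the kube-system containers through crictl and then asking runc for their state. On this crio node the runc state directory /run/runc does not exist, so "sudo runc list -f json" exits with status 1 and the addon command aborts even though nothing is paused. A minimal sketch for reproducing the check by hand, assuming the addons-153780 profile is still up; the two remote commands are copied from the log, only the ssh wrapping is mine:

	# Step 1 of the pause check (succeeds): collect kube-system container IDs.
	out/minikube-linux-arm64 -p addons-153780 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Step 2 (fails): runc only sees state under its root, /run/runc by default,
	# which is absent on this crio image.
	out/minikube-linux-arm64 -p addons-153780 ssh -- sudo runc list -f json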

TestAddons/parallel/Headlamp (4.23s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-153780 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-153780 --alsologtostderr -v=1: exit status 11 (256.474502ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 03:16:57.278389  299528 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:16:57.279253  299528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:57.279269  299528 out.go:374] Setting ErrFile to fd 2...
	I1124 03:16:57.279275  299528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:57.279593  299528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:16:57.279936  299528 mustload.go:66] Loading cluster: addons-153780
	I1124 03:16:57.280363  299528 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:57.280385  299528 addons.go:622] checking whether the cluster is paused
	I1124 03:16:57.280531  299528 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:57.280550  299528 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:16:57.281098  299528 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:16:57.299039  299528 ssh_runner.go:195] Run: systemctl --version
	I1124 03:16:57.299106  299528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:16:57.315821  299528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:16:57.417004  299528 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:16:57.417120  299528 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:16:57.448143  299528 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:16:57.448185  299528 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:16:57.448191  299528 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:16:57.448196  299528 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:16:57.448199  299528 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:16:57.448203  299528 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:16:57.448238  299528 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:16:57.448242  299528 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:16:57.448245  299528 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:16:57.448251  299528 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:16:57.448259  299528 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:16:57.448262  299528 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:16:57.448266  299528 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:16:57.448270  299528 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:16:57.448273  299528 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:16:57.448278  299528 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:16:57.448284  299528 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:16:57.448301  299528 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:16:57.448306  299528 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:16:57.448309  299528 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:16:57.448316  299528 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:16:57.448326  299528 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:16:57.448329  299528 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:16:57.448332  299528 cri.go:89] found id: ""
	I1124 03:16:57.448404  299528 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:16:57.463997  299528 out.go:203] 
	W1124 03:16:57.467075  299528 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:16:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:16:57.467103  299528 out.go:285] * 
	W1124 03:16:57.472593  299528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:16:57.475589  299528 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-153780 --alsologtostderr -v=1": exit status 11
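The headlamp enable above hits the same paused-cluster check as the CSI failure. The advice box in the stderr names the two artifacts to attach when reporting it upstream; the command and the log path below are taken verbatim from that box:

	out/minikube-linux-arm64 -p addons-153780 logs --file=logs.txt
	cat /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log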
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-153780
helpers_test.go:243: (dbg) docker inspect addons-153780:

-- stdout --
	[
	    {
	        "Id": "c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4",
	        "Created": "2025-11-24T03:13:54.24845116Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292550,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:13:54.330497265Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4/hostname",
	        "HostsPath": "/var/lib/docker/containers/c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4/hosts",
	        "LogPath": "/var/lib/docker/containers/c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4/c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4-json.log",
	        "Name": "/addons-153780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-153780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-153780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c475d9049df5b92be1c834e60dc55fabb2e8bfcb838c569945730e8173f565a4",
	                "LowerDir": "/var/lib/docker/overlay2/4aca70ce84ed29d2d22fb2bea7d783140df107a3524b3dd95ff3f84cfb14e5e7-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4aca70ce84ed29d2d22fb2bea7d783140df107a3524b3dd95ff3f84cfb14e5e7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4aca70ce84ed29d2d22fb2bea7d783140df107a3524b3dd95ff3f84cfb14e5e7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4aca70ce84ed29d2d22fb2bea7d783140df107a3524b3dd95ff3f84cfb14e5e7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-153780",
	                "Source": "/var/lib/docker/volumes/addons-153780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-153780",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-153780",
	                "name.minikube.sigs.k8s.io": "addons-153780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ad8f7a9eeef4c985a500846f0c83191d6fd3bc91a84be2fb79d9eed270839d12",
	            "SandboxKey": "/var/run/docker/netns/ad8f7a9eeef4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-153780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:3c:a7:5d:18:85",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e84e42d10d667ce334546571b9e3511d266786293c95c8dc2a2fc672a60a2b37",
	                    "EndpointID": "d95674ecb10592bfb7689e8f3aa162b82325860a3fb998e0677bc272216e4a5f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-153780",
	                        "c475d9049df5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
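Two details in the inspect output explain the port handling seen earlier: HostConfig.PortBindings requests HostPort "" (let Docker choose) for every published port, and the ephemeral ports actually assigned land under NetworkSettings.Ports, which is why the harness resolves the SSH endpoint at runtime rather than hardcoding it. The same lookup can be run by hand; the template is the one from the cli_runner line at 03:16:57 above, re-quoted for an interactive shell:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-153780
	# prints 33139 on this run, matching the sshutil line above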
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-153780 -n addons-153780
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-153780 logs -n 25: (1.56080676s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	COMMAND | ARGS | PROFILE | USER | VERSION | START TIME | END TIME
	------- | ---- | ------- | ---- | ------- | ---------- | --------
	start | -o=json --download-only -p download-only-738458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio | download-only-738458 | jenkins | v1.37.0 | 24 Nov 25 03:12 UTC |
	delete | --all | minikube | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC | 24 Nov 25 03:13 UTC
	delete | -p download-only-738458 | download-only-738458 | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC | 24 Nov 25 03:13 UTC
	start | -o=json --download-only -p download-only-946785 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio | download-only-946785 | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC |
	delete | --all | minikube | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC | 24 Nov 25 03:13 UTC
	delete | -p download-only-946785 | download-only-946785 | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC | 24 Nov 25 03:13 UTC
	delete | -p download-only-738458 | download-only-738458 | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC | 24 Nov 25 03:13 UTC
	delete | -p download-only-946785 | download-only-946785 | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC | 24 Nov 25 03:13 UTC
	start | --download-only -p download-docker-545793 --alsologtostderr --driver=docker  --container-runtime=crio | download-docker-545793 | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC |
	delete | -p download-docker-545793 | download-docker-545793 | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC | 24 Nov 25 03:13 UTC
	start | --download-only -p binary-mirror-193578 --alsologtostderr --binary-mirror http://127.0.0.1:46679 --driver=docker  --container-runtime=crio | binary-mirror-193578 | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC |
	delete | -p binary-mirror-193578 | binary-mirror-193578 | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC | 24 Nov 25 03:13 UTC
	addons | enable dashboard -p addons-153780 | addons-153780 | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC |
	addons | disable dashboard -p addons-153780 | addons-153780 | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC |
	start | -p addons-153780 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher | addons-153780 | jenkins | v1.37.0 | 24 Nov 25 03:13 UTC | 24 Nov 25 03:16 UTC
	addons | addons-153780 addons disable volcano --alsologtostderr -v=1 | addons-153780 | jenkins | v1.37.0 | 24 Nov 25 03:16 UTC |
	addons | addons-153780 addons disable gcp-auth --alsologtostderr -v=1 | addons-153780 | jenkins | v1.37.0 | 24 Nov 25 03:16 UTC |
	addons | addons-153780 addons disable yakd --alsologtostderr -v=1 | addons-153780 | jenkins | v1.37.0 | 24 Nov 25 03:16 UTC |
	ip | addons-153780 ip | addons-153780 | jenkins | v1.37.0 | 24 Nov 25 03:16 UTC | 24 Nov 25 03:16 UTC
	addons | addons-153780 addons disable registry --alsologtostderr -v=1 | addons-153780 | jenkins | v1.37.0 | 24 Nov 25 03:16 UTC |
	addons | addons-153780 addons disable nvidia-device-plugin --alsologtostderr -v=1 | addons-153780 | jenkins | v1.37.0 | 24 Nov 25 03:16 UTC |
	ssh | addons-153780 ssh cat /opt/local-path-provisioner/pvc-eefc238c-13a7-4139-bcbc-502e91e6b046_default_test-pvc/file1 | addons-153780 | jenkins | v1.37.0 | 24 Nov 25 03:16 UTC | 24 Nov 25 03:16 UTC
	addons | addons-153780 addons disable storage-provisioner-rancher --alsologtostderr -v=1 | addons-153780 | jenkins | v1.37.0 | 24 Nov 25 03:16 UTC |
	addons | addons-153780 addons disable cloud-spanner --alsologtostderr -v=1 | addons-153780 | jenkins | v1.37.0 | 24 Nov 25 03:16 UTC |
	addons | enable headlamp -p addons-153780 --alsologtostderr -v=1 | addons-153780 | jenkins | v1.37.0 | 24 Nov 25 03:16 UTC |
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:13:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
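	(Reading the first entry below against that format: I is the Info severity, 1124 is Nov 24, 03:13:29.428779 the wall-clock time, 292146 the thread id, and out.go:360 the emitting file and line.)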
	I1124 03:13:29.428779  292146 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:13:29.428910  292146 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:29.428919  292146 out.go:374] Setting ErrFile to fd 2...
	I1124 03:13:29.428924  292146 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:29.429160  292146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:13:29.429589  292146 out.go:368] Setting JSON to false
	I1124 03:13:29.430386  292146 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6939,"bootTime":1763947071,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 03:13:29.430486  292146 start.go:143] virtualization:  
	I1124 03:13:29.433962  292146 out.go:179] * [addons-153780] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:13:29.436912  292146 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:13:29.436984  292146 notify.go:221] Checking for updates...
	I1124 03:13:29.442864  292146 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:13:29.445782  292146 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 03:13:29.448806  292146 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 03:13:29.452187  292146 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:13:29.455128  292146 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:13:29.458295  292146 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:13:29.492101  292146 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:13:29.492238  292146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:29.551207  292146 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-24 03:13:29.541297467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:13:29.551321  292146 docker.go:319] overlay module found
	I1124 03:13:29.554439  292146 out.go:179] * Using the docker driver based on user configuration
	I1124 03:13:29.557401  292146 start.go:309] selected driver: docker
	I1124 03:13:29.557425  292146 start.go:927] validating driver "docker" against <nil>
	I1124 03:13:29.557439  292146 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:13:29.558181  292146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:29.618873  292146 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-24 03:13:29.610067952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:13:29.619025  292146 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:13:29.619244  292146 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:13:29.622312  292146 out.go:179] * Using Docker driver with root privileges
	I1124 03:13:29.625082  292146 cni.go:84] Creating CNI manager for ""
	I1124 03:13:29.625156  292146 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:13:29.625169  292146 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:13:29.625257  292146 start.go:353] cluster config:
	{Name:addons-153780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-153780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:29.628224  292146 out.go:179] * Starting "addons-153780" primary control-plane node in "addons-153780" cluster
	I1124 03:13:29.630962  292146 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:13:29.633898  292146 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:13:29.636661  292146 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:13:29.636707  292146 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 03:13:29.636720  292146 cache.go:65] Caching tarball of preloaded images
	I1124 03:13:29.636730  292146 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:13:29.636805  292146 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 03:13:29.636816  292146 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:13:29.637165  292146 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/config.json ...
	I1124 03:13:29.637195  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/config.json: {Name:mk8d9952a307787a3248d1e4288b64c24558edda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:29.652357  292146 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 to local cache
	I1124 03:13:29.652507  292146 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory
	I1124 03:13:29.652526  292146 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory, skipping pull
	I1124 03:13:29.652531  292146 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in cache, skipping pull
	I1124 03:13:29.652538  292146 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 as a tarball
	I1124 03:13:29.652543  292146 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 from local cache
	I1124 03:13:47.786699  292146 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 from cached tarball
	I1124 03:13:47.786737  292146 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:13:47.786796  292146 start.go:360] acquireMachinesLock for addons-153780: {Name:mk35d609c14454834f274f9197604c5ae01b8f37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:47.786933  292146 start.go:364] duration metric: took 113.651µs to acquireMachinesLock for "addons-153780"
	I1124 03:13:47.786965  292146 start.go:93] Provisioning new machine with config: &{Name:addons-153780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-153780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:13:47.787035  292146 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:13:47.790473  292146 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1124 03:13:47.790727  292146 start.go:159] libmachine.API.Create for "addons-153780" (driver="docker")
	I1124 03:13:47.790765  292146 client.go:173] LocalClient.Create starting
	I1124 03:13:47.790880  292146 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem
	I1124 03:13:47.870858  292146 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem
	I1124 03:13:48.102559  292146 cli_runner.go:164] Run: docker network inspect addons-153780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:13:48.118561  292146 cli_runner.go:211] docker network inspect addons-153780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:13:48.118660  292146 network_create.go:284] running [docker network inspect addons-153780] to gather additional debugging logs...
	I1124 03:13:48.118683  292146 cli_runner.go:164] Run: docker network inspect addons-153780
	W1124 03:13:48.134910  292146 cli_runner.go:211] docker network inspect addons-153780 returned with exit code 1
	I1124 03:13:48.134942  292146 network_create.go:287] error running [docker network inspect addons-153780]: docker network inspect addons-153780: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-153780 not found
	I1124 03:13:48.134957  292146 network_create.go:289] output of [docker network inspect addons-153780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-153780 not found
	
	** /stderr **
	I1124 03:13:48.135061  292146 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:13:48.152503  292146 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ce200}
	I1124 03:13:48.152553  292146 network_create.go:124] attempt to create docker network addons-153780 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 03:13:48.152609  292146 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-153780 addons-153780
	I1124 03:13:48.208248  292146 network_create.go:108] docker network addons-153780 192.168.49.0/24 created
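As a quick cross-check of the step above, the created network's IPAM settings can be read back with `docker network inspect` (a minimal sketch using the profile name from this run; the template indexes the first IPAM config block):

	docker network inspect addons-153780 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected, per the log above: 192.168.49.0/24 192.168.49.1
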
	I1124 03:13:48.208278  292146 kic.go:121] calculated static IP "192.168.49.2" for the "addons-153780" container
	I1124 03:13:48.208353  292146 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:13:48.233551  292146 cli_runner.go:164] Run: docker volume create addons-153780 --label name.minikube.sigs.k8s.io=addons-153780 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:13:48.252534  292146 oci.go:103] Successfully created a docker volume addons-153780
	I1124 03:13:48.252643  292146 cli_runner.go:164] Run: docker run --rm --name addons-153780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-153780 --entrypoint /usr/bin/test -v addons-153780:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:13:49.711168  292146 cli_runner.go:217] Completed: docker run --rm --name addons-153780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-153780 --entrypoint /usr/bin/test -v addons-153780:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib: (1.458489124s)
	I1124 03:13:49.711199  292146 oci.go:107] Successfully prepared a docker volume addons-153780
	I1124 03:13:49.711241  292146 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:13:49.711252  292146 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:13:49.711321  292146 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-153780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:13:54.181763  292146 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-153780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (4.47040478s)
	I1124 03:13:54.181806  292146 kic.go:203] duration metric: took 4.470549429s to extract preloaded images to volume ...
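The preload extraction above reduces to a simple pattern: mount the lz4 tarball read-only alongside the named volume, and untar into the volume inside a throwaway container. A hedged sketch, with $PRELOAD and $KICBASE_IMAGE standing in for the tarball path and image shown in the log:

	docker run --rm \
	  --entrypoint /usr/bin/tar \
	  -v "$PRELOAD:/preloaded.tar:ro" \
	  -v addons-153780:/extractDir \
	  "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir
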
	W1124 03:13:54.181935  292146 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 03:13:54.182055  292146 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:13:54.234586  292146 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-153780 --name addons-153780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-153780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-153780 --network addons-153780 --ip 192.168.49.2 --volume addons-153780:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:13:54.539636  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Running}}
	I1124 03:13:54.558502  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:13:54.583723  292146 cli_runner.go:164] Run: docker exec addons-153780 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:13:54.638383  292146 oci.go:144] the created container "addons-153780" has a running status.
	I1124 03:13:54.638412  292146 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa...
	I1124 03:13:54.871810  292146 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:13:54.896101  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:13:54.914672  292146 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:13:54.914692  292146 kic_runner.go:114] Args: [docker exec --privileged addons-153780 chown docker:docker /home/docker/.ssh/authorized_keys]
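The kic_runner steps above amount to standard key provisioning for the in-container "docker" user. A rough equivalent by hand (file names hypothetical; only `docker cp` and `docker exec` are assumed):

	ssh-keygen -t rsa -N '' -f ./id_rsa
	docker exec addons-153780 mkdir -p /home/docker/.ssh
	docker cp ./id_rsa.pub addons-153780:/home/docker/.ssh/authorized_keys
	docker exec --privileged addons-153780 chown docker:docker /home/docker/.ssh/authorized_keys
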
	I1124 03:13:54.985133  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:13:55.021668  292146 machine.go:94] provisionDockerMachine start ...
	I1124 03:13:55.021782  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:55.052466  292146 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:55.052789  292146 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1124 03:13:55.052798  292146 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:13:55.053664  292146 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51594->127.0.0.1:33139: read: connection reset by peer
	I1124 03:13:58.205815  292146 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-153780
	
	I1124 03:13:58.205842  292146 ubuntu.go:182] provisioning hostname "addons-153780"
	I1124 03:13:58.205909  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:58.224250  292146 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:58.224573  292146 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1124 03:13:58.224588  292146 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-153780 && echo "addons-153780" | sudo tee /etc/hostname
	I1124 03:13:58.379693  292146 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-153780
	
	I1124 03:13:58.379767  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:58.397728  292146 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:58.398081  292146 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1124 03:13:58.398098  292146 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-153780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-153780/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-153780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:13:58.546661  292146 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:13:58.546700  292146 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 03:13:58.546723  292146 ubuntu.go:190] setting up certificates
	I1124 03:13:58.546732  292146 provision.go:84] configureAuth start
	I1124 03:13:58.546792  292146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-153780
	I1124 03:13:58.566446  292146 provision.go:143] copyHostCerts
	I1124 03:13:58.566562  292146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 03:13:58.566681  292146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 03:13:58.566732  292146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 03:13:58.566802  292146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.addons-153780 san=[127.0.0.1 192.168.49.2 addons-153780 localhost minikube]
	I1124 03:13:58.728647  292146 provision.go:177] copyRemoteCerts
	I1124 03:13:58.728718  292146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:13:58.728757  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:58.745514  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:13:58.846114  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:13:58.863200  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:13:58.880979  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 03:13:58.898579  292146 provision.go:87] duration metric: took 351.823085ms to configureAuth
	I1124 03:13:58.898606  292146 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:13:58.898851  292146 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:13:58.898987  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:58.915320  292146 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:58.915626  292146 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1124 03:13:58.915643  292146 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:13:59.230496  292146 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:13:59.230570  292146 machine.go:97] duration metric: took 4.208874429s to provisionDockerMachine
	I1124 03:13:59.230605  292146 client.go:176] duration metric: took 11.439829028s to LocalClient.Create
	I1124 03:13:59.230661  292146 start.go:167] duration metric: took 11.439934843s to libmachine.API.Create "addons-153780"
	I1124 03:13:59.230688  292146 start.go:293] postStartSetup for "addons-153780" (driver="docker")
	I1124 03:13:59.230726  292146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:13:59.230820  292146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:13:59.230930  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:59.247844  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:13:59.350438  292146 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:13:59.353811  292146 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:13:59.353841  292146 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:13:59.353868  292146 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 03:13:59.353952  292146 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 03:13:59.353994  292146 start.go:296] duration metric: took 123.28567ms for postStartSetup
	I1124 03:13:59.354321  292146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-153780
	I1124 03:13:59.371765  292146 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/config.json ...
	I1124 03:13:59.372047  292146 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:13:59.372097  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:59.388841  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:13:59.487479  292146 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:13:59.493008  292146 start.go:128] duration metric: took 11.705958004s to createHost
	I1124 03:13:59.493038  292146 start.go:83] releasing machines lock for "addons-153780", held for 11.706089008s
	I1124 03:13:59.493120  292146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-153780
	I1124 03:13:59.509938  292146 ssh_runner.go:195] Run: cat /version.json
	I1124 03:13:59.510000  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:59.510026  292146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:13:59.510081  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:13:59.531640  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:13:59.534604  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:13:59.719986  292146 ssh_runner.go:195] Run: systemctl --version
	I1124 03:13:59.727082  292146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:13:59.764920  292146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:13:59.769189  292146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:13:59.769273  292146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:13:59.796878  292146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 03:13:59.796913  292146 start.go:496] detecting cgroup driver to use...
	I1124 03:13:59.796946  292146 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 03:13:59.796997  292146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:13:59.814586  292146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:13:59.827496  292146 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:13:59.827561  292146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:13:59.844852  292146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:13:59.863634  292146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:13:59.988001  292146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:14:00.317634  292146 docker.go:234] disabling docker service ...
	I1124 03:14:00.317742  292146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:14:00.354122  292146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:14:00.371621  292146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:14:00.501237  292146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:14:00.626977  292146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:14:00.640382  292146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:14:00.654937  292146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:14:00.655027  292146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:14:00.663955  292146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 03:14:00.664025  292146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:14:00.673055  292146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:14:00.682220  292146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:14:00.691352  292146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:14:00.701103  292146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:14:00.710097  292146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:14:00.723784  292146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
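Taken together, the sed edits above leave the CRI-O drop-in with the pause image, cgroup driver, conmon cgroup, and unprivileged-port sysctl set. A sketch of how to confirm the end state (the expected values are inferred from the edits, not captured in the log):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected (inferred from the edits above):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
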
	I1124 03:14:00.732361  292146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:14:00.739950  292146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:14:00.747816  292146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:14:00.877425  292146 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:14:01.059664  292146 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:14:01.059761  292146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:14:01.063869  292146 start.go:564] Will wait 60s for crictl version
	I1124 03:14:01.063994  292146 ssh_runner.go:195] Run: which crictl
	I1124 03:14:01.067723  292146 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:14:01.094048  292146 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:14:01.094224  292146 ssh_runner.go:195] Run: crio --version
	I1124 03:14:01.125950  292146 ssh_runner.go:195] Run: crio --version
	I1124 03:14:01.161168  292146 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:14:01.164074  292146 cli_runner.go:164] Run: docker network inspect addons-153780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:14:01.181261  292146 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 03:14:01.185815  292146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
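The one-liner above is a small idempotent-update idiom for /etc/hosts: filter out any stale line for the name, append the fresh mapping, and copy the temp file back in one step, so repeated runs converge to a single entry. Generalized sketch (NAME and IP hypothetical):

	NAME=host.minikube.internal; IP=192.168.49.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
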
	I1124 03:14:01.197645  292146 kubeadm.go:884] updating cluster {Name:addons-153780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-153780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:14:01.197770  292146 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:14:01.197832  292146 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:14:01.233157  292146 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:14:01.233184  292146 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:14:01.233240  292146 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:14:01.260629  292146 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:14:01.260655  292146 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:14:01.260663  292146 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1124 03:14:01.260758  292146 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-153780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-153780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:14:01.260844  292146 ssh_runner.go:195] Run: crio config
	I1124 03:14:01.333752  292146 cni.go:84] Creating CNI manager for ""
	I1124 03:14:01.333817  292146 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:14:01.333858  292146 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:14:01.333911  292146 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-153780 NodeName:addons-153780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:14:01.334115  292146 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-153780"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:14:01.334233  292146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:14:01.342459  292146 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:14:01.342590  292146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:14:01.350328  292146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1124 03:14:01.363617  292146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:14:01.376720  292146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
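With the unit files and kubeadm config staged, the generated config can be sanity-checked from inside the node before the real init, without mutating anything (a sketch; kubeadm supports --dry-run, and the versioned binary path is the one used later in this log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
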
	I1124 03:14:01.390411  292146 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:14:01.394171  292146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:14:01.403932  292146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:14:01.522107  292146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:14:01.537430  292146 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780 for IP: 192.168.49.2
	I1124 03:14:01.537500  292146 certs.go:195] generating shared ca certs ...
	I1124 03:14:01.537532  292146 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:01.537736  292146 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 03:14:02.493979  292146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt ...
	I1124 03:14:02.494014  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt: {Name:mk226cdfc793e85d0a3112b814b9be095b5ed993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:02.494274  292146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key ...
	I1124 03:14:02.494291  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key: {Name:mkdb31c096e2ce62729da2c9c4457652a692de4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:02.494385  292146 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 03:14:02.641646  292146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt ...
	I1124 03:14:02.641675  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt: {Name:mka4de975327d77cfeb05706ee704457ea7ab8ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:02.641846  292146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key ...
	I1124 03:14:02.641860  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key: {Name:mkcb375acb16a1cfd2c844cf4167c1342ebaf3be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:02.641941  292146 certs.go:257] generating profile certs ...
	I1124 03:14:02.642009  292146 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.key
	I1124 03:14:02.642024  292146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt with IP's: []
	I1124 03:14:02.778812  292146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt ...
	I1124 03:14:02.778847  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: {Name:mk834a93ff488a7958ff2898bbc70e2dc8d763db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:02.779026  292146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.key ...
	I1124 03:14:02.779041  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.key: {Name:mkd956f653538a237e5b9f5f7ab8997897f2f672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:02.779123  292146 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.key.6249716f
	I1124 03:14:02.779146  292146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.crt.6249716f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 03:14:03.490148  292146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.crt.6249716f ...
	I1124 03:14:03.490181  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.crt.6249716f: {Name:mkaa045e816925e18b14c782038ccf8c377c3849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:03.490366  292146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.key.6249716f ...
	I1124 03:14:03.490380  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.key.6249716f: {Name:mkb10a71376e57f7735da7ed37052f88f0797d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:03.490485  292146 certs.go:382] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.crt.6249716f -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.crt
	I1124 03:14:03.490565  292146 certs.go:386] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.key.6249716f -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.key
	I1124 03:14:03.490619  292146 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.key
	I1124 03:14:03.490640  292146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.crt with IP's: []
	I1124 03:14:03.692027  292146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.crt ...
	I1124 03:14:03.692061  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.crt: {Name:mk287a247827b8c2fd1687dc3f4b741f4f06a696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:03.692247  292146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.key ...
	I1124 03:14:03.692263  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.key: {Name:mked341a3485b5677508e9324292afb4093d7fe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:03.692457  292146 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:14:03.692504  292146 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:14:03.692535  292146 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:14:03.692568  292146 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
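All of the crypto.go generation above is plain X.509 work: one self-signed CA, then leaf certs signed by it with the SANs listed in the log. An equivalent openssl sketch (file names hypothetical; minikube does this in-process rather than shelling out):

	openssl genrsa -out ca.key 2048
	openssl req -x509 -new -key ca.key -subj '/CN=minikubeCA' -days 365 -out ca.crt
	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj '/CN=minikube' -out apiserver.csr
	printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.49.2\n' > san.ext
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -extfile san.ext -days 365 -out apiserver.crt
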
	I1124 03:14:03.693177  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:14:03.711166  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 03:14:03.732299  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:14:03.751029  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:14:03.769202  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 03:14:03.786957  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:14:03.804226  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:14:03.822109  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:14:03.839976  292146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:14:03.858120  292146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:14:03.871005  292146 ssh_runner.go:195] Run: openssl version
	I1124 03:14:03.877461  292146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:14:03.886534  292146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:14:03.891093  292146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:14:03.891234  292146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:14:03.936237  292146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
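The b5213941.0 link created above follows OpenSSL's hashed-directory convention: the file name is the CA certificate's subject hash plus a sequence number, which is what lets TLS clients find the CA under /etc/ssl/certs. The hash comes straight from the command already shown in the log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941, hence the trust link /etc/ssl/certs/b5213941.0
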
	I1124 03:14:03.945108  292146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:14:03.948824  292146 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:14:03.948894  292146 kubeadm.go:401] StartCluster: {Name:addons-153780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-153780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:14:03.949000  292146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:14:03.949070  292146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:14:03.977538  292146 cri.go:89] found id: ""
	I1124 03:14:03.977660  292146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:14:03.985515  292146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:14:03.993182  292146 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:14:03.993291  292146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:14:04.002603  292146 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:14:04.002691  292146 kubeadm.go:158] found existing configuration files:
	
	I1124 03:14:04.002780  292146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:14:04.012431  292146 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:14:04.012501  292146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:14:04.020530  292146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:14:04.028903  292146 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:14:04.028989  292146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:14:04.037195  292146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:14:04.045640  292146 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:14:04.045760  292146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:14:04.053583  292146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:14:04.061633  292146 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:14:04.061753  292146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:14:04.070029  292146 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:14:04.136691  292146 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 03:14:04.136950  292146 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:14:04.206136  292146 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:14:21.034026  292146 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:14:21.034087  292146 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:14:21.034176  292146 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:14:21.034235  292146 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 03:14:21.034273  292146 kubeadm.go:319] OS: Linux
	I1124 03:14:21.034321  292146 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:14:21.034373  292146 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 03:14:21.034423  292146 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:14:21.034519  292146 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:14:21.034573  292146 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:14:21.034624  292146 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:14:21.034672  292146 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:14:21.034720  292146 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:14:21.034766  292146 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 03:14:21.034838  292146 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:14:21.034931  292146 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:14:21.035021  292146 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:14:21.035083  292146 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:14:21.038169  292146 out.go:252]   - Generating certificates and keys ...
	I1124 03:14:21.038270  292146 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:14:21.038344  292146 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:14:21.038420  292146 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:14:21.038509  292146 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:14:21.038576  292146 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:14:21.038659  292146 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:14:21.038718  292146 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:14:21.038839  292146 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-153780 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 03:14:21.038896  292146 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:14:21.039015  292146 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-153780 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 03:14:21.039085  292146 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:14:21.039152  292146 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:14:21.039200  292146 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:14:21.039259  292146 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:14:21.039314  292146 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:14:21.039385  292146 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:14:21.039448  292146 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:14:21.039516  292146 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:14:21.039574  292146 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:14:21.039659  292146 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:14:21.039729  292146 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:14:21.042702  292146 out.go:252]   - Booting up control plane ...
	I1124 03:14:21.042906  292146 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:14:21.043004  292146 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:14:21.043077  292146 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:14:21.043211  292146 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:14:21.043352  292146 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:14:21.043475  292146 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:14:21.043632  292146 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:14:21.043691  292146 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:14:21.043874  292146 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:14:21.044012  292146 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:14:21.044083  292146 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002054542s
	I1124 03:14:21.044182  292146 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:14:21.044276  292146 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1124 03:14:21.044427  292146 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:14:21.044551  292146 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:14:21.044641  292146 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.534236431s
	I1124 03:14:21.044717  292146 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.657687708s
	I1124 03:14:21.044803  292146 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503041608s
	I1124 03:14:21.044920  292146 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:14:21.045055  292146 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:14:21.045133  292146 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:14:21.045405  292146 kubeadm.go:319] [mark-control-plane] Marking the node addons-153780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:14:21.045490  292146 kubeadm.go:319] [bootstrap-token] Using token: 1ng5of.h0h75cft7s8kvxk0
	I1124 03:14:21.048772  292146 out.go:252]   - Configuring RBAC rules ...
	I1124 03:14:21.048937  292146 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:14:21.049074  292146 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:14:21.049287  292146 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:14:21.049471  292146 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:14:21.049608  292146 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:14:21.049729  292146 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:14:21.049881  292146 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:14:21.049955  292146 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:14:21.050032  292146 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:14:21.050063  292146 kubeadm.go:319] 
	I1124 03:14:21.050160  292146 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:14:21.050172  292146 kubeadm.go:319] 
	I1124 03:14:21.050255  292146 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:14:21.050266  292146 kubeadm.go:319] 
	I1124 03:14:21.050293  292146 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:14:21.050377  292146 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:14:21.050440  292146 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:14:21.050584  292146 kubeadm.go:319] 
	I1124 03:14:21.050641  292146 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:14:21.050644  292146 kubeadm.go:319] 
	I1124 03:14:21.050700  292146 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:14:21.050712  292146 kubeadm.go:319] 
	I1124 03:14:21.050767  292146 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:14:21.050854  292146 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:14:21.050936  292146 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:14:21.050945  292146 kubeadm.go:319] 
	I1124 03:14:21.051030  292146 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:14:21.051115  292146 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:14:21.051123  292146 kubeadm.go:319] 
	I1124 03:14:21.051213  292146 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1ng5of.h0h75cft7s8kvxk0 \
	I1124 03:14:21.051319  292146 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 \
	I1124 03:14:21.051360  292146 kubeadm.go:319] 	--control-plane 
	I1124 03:14:21.051370  292146 kubeadm.go:319] 
	I1124 03:14:21.051455  292146 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:14:21.051463  292146 kubeadm.go:319] 
	I1124 03:14:21.051547  292146 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1ng5of.h0h75cft7s8kvxk0 \
	I1124 03:14:21.051695  292146 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 
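The bootstrap token in the join commands above is short-lived (24 hours by default). If a node needs to join after it expires, a replacement join command can be generated on the control plane; a minimal sketch:

    # prints a fresh "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ..." line
    sudo kubeadm token create --print-join-command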
	I1124 03:14:21.051713  292146 cni.go:84] Creating CNI manager for ""
	I1124 03:14:21.051724  292146 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:14:21.054847  292146 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:14:21.057751  292146 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:14:21.062069  292146 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:14:21.062089  292146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:14:21.075276  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
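With the kindnet manifest applied, the CNI DaemonSet can be checked directly. A quick sketch (the app=kindnet label is an assumption about the manifest, not taken from this log):

    # one kindnet pod per node should reach Running before workload pods get IPs
    kubectl -n kube-system get pods -l app=kindnet -o wide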
	I1124 03:14:21.367073  292146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:14:21.367213  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:21.367296  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-153780 minikube.k8s.io/updated_at=2025_11_24T03_14_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=addons-153780 minikube.k8s.io/primary=true
	I1124 03:14:21.512955  292146 ops.go:34] apiserver oom_adj: -16
	I1124 03:14:21.513136  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:22.013265  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:22.514106  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:23.014107  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:23.513766  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:24.014124  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:24.513197  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:25.013867  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:25.513805  292146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:14:25.658870  292146 kubeadm.go:1114] duration metric: took 4.291710678s to wait for elevateKubeSystemPrivileges
	I1124 03:14:25.658906  292146 kubeadm.go:403] duration metric: took 21.710035602s to StartCluster
	I1124 03:14:25.658923  292146 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:25.659041  292146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 03:14:25.659409  292146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:14:25.659637  292146 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:14:25.659796  292146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:14:25.660063  292146 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:14:25.660108  292146 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 03:14:25.660202  292146 addons.go:70] Setting yakd=true in profile "addons-153780"
	I1124 03:14:25.660229  292146 addons.go:239] Setting addon yakd=true in "addons-153780"
	I1124 03:14:25.660262  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.660836  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.661302  292146 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-153780"
	I1124 03:14:25.661326  292146 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-153780"
	I1124 03:14:25.661350  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.661800  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.662308  292146 addons.go:70] Setting cloud-spanner=true in profile "addons-153780"
	I1124 03:14:25.662331  292146 addons.go:239] Setting addon cloud-spanner=true in "addons-153780"
	I1124 03:14:25.662354  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.662828  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.665413  292146 out.go:179] * Verifying Kubernetes components...
	I1124 03:14:25.666219  292146 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-153780"
	I1124 03:14:25.666275  292146 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-153780"
	I1124 03:14:25.666325  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.669291  292146 addons.go:70] Setting registry=true in profile "addons-153780"
	I1124 03:14:25.669318  292146 addons.go:239] Setting addon registry=true in "addons-153780"
	I1124 03:14:25.669374  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.669834  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.672502  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.682670  292146 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-153780"
	I1124 03:14:25.682776  292146 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-153780"
	I1124 03:14:25.682838  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.683354  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.683600  292146 addons.go:70] Setting registry-creds=true in profile "addons-153780"
	I1124 03:14:25.683615  292146 addons.go:239] Setting addon registry-creds=true in "addons-153780"
	I1124 03:14:25.683639  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.684037  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.704276  292146 addons.go:70] Setting storage-provisioner=true in profile "addons-153780"
	I1124 03:14:25.704320  292146 addons.go:239] Setting addon storage-provisioner=true in "addons-153780"
	I1124 03:14:25.704356  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.706059  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.706493  292146 addons.go:70] Setting default-storageclass=true in profile "addons-153780"
	I1124 03:14:25.706512  292146 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-153780"
	I1124 03:14:25.706793  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.726530  292146 addons.go:70] Setting gcp-auth=true in profile "addons-153780"
	I1124 03:14:25.726572  292146 mustload.go:66] Loading cluster: addons-153780
	I1124 03:14:25.726777  292146 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:14:25.727053  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.727340  292146 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-153780"
	I1124 03:14:25.727364  292146 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-153780"
	I1124 03:14:25.727631  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.741770  292146 addons.go:70] Setting ingress=true in profile "addons-153780"
	I1124 03:14:25.741802  292146 addons.go:239] Setting addon ingress=true in "addons-153780"
	I1124 03:14:25.741854  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.742356  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.744754  292146 addons.go:70] Setting volcano=true in profile "addons-153780"
	I1124 03:14:25.744782  292146 addons.go:239] Setting addon volcano=true in "addons-153780"
	I1124 03:14:25.744816  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.745286  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.760676  292146 addons.go:70] Setting volumesnapshots=true in profile "addons-153780"
	I1124 03:14:25.760711  292146 addons.go:239] Setting addon volumesnapshots=true in "addons-153780"
	I1124 03:14:25.760747  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.761235  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.781478  292146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:14:25.804374  292146 addons.go:70] Setting ingress-dns=true in profile "addons-153780"
	I1124 03:14:25.804465  292146 addons.go:239] Setting addon ingress-dns=true in "addons-153780"
	I1124 03:14:25.804540  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.805141  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.833649  292146 addons.go:70] Setting inspektor-gadget=true in profile "addons-153780"
	I1124 03:14:25.833733  292146 addons.go:239] Setting addon inspektor-gadget=true in "addons-153780"
	I1124 03:14:25.833788  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.834304  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.854198  292146 addons.go:70] Setting metrics-server=true in profile "addons-153780"
	I1124 03:14:25.854283  292146 addons.go:239] Setting addon metrics-server=true in "addons-153780"
	I1124 03:14:25.854354  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:25.855347  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:25.875645  292146 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 03:14:25.886007  292146 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 03:14:25.889966  292146 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 03:14:25.890027  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 03:14:25.890134  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:25.902175  292146 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 03:14:25.913362  292146 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 03:14:25.913440  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 03:14:25.913545  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:25.934739  292146 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 03:14:25.937860  292146 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 03:14:25.937892  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 03:14:25.937971  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.041760  292146 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1124 03:14:26.047200  292146 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 03:14:26.047230  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 03:14:26.047311  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.062235  292146 addons.go:239] Setting addon default-storageclass=true in "addons-153780"
	I1124 03:14:26.062285  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:26.066883  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:26.066979  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:26.069983  292146 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-153780"
	I1124 03:14:26.070072  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:26.072880  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:26.101589  292146 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 03:14:26.101595  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 03:14:26.102226  292146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:14:26.104698  292146 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 03:14:26.104723  292146 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 03:14:26.104815  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.109888  292146 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 03:14:26.119388  292146 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 03:14:26.119412  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 03:14:26.119506  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	W1124 03:14:26.125083  292146 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 03:14:26.135667  292146 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:14:26.139121  292146 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 03:14:26.141297  292146 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:14:26.141316  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:14:26.141391  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.144689  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.155672  292146 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 03:14:26.135727  292146 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 03:14:26.160126  292146 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 03:14:26.160146  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 03:14:26.160214  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.163064  292146 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 03:14:26.163087  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 03:14:26.163151  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.190726  292146 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 03:14:26.190937  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 03:14:26.192190  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.192602  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.196395  292146 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 03:14:26.197348  292146 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 03:14:26.197367  292146 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 03:14:26.197440  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.202364  292146 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 03:14:26.202588  292146 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 03:14:26.202603  292146 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 03:14:26.202677  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.212166  292146 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 03:14:26.212193  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 03:14:26.212252  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.212698  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.216031  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 03:14:26.224694  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 03:14:26.237469  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 03:14:26.245663  292146 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 03:14:26.248573  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 03:14:26.252635  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 03:14:26.254679  292146 out.go:179]   - Using image docker.io/busybox:stable
	I1124 03:14:26.257566  292146 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 03:14:26.257589  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 03:14:26.257668  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.270513  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 03:14:26.273718  292146 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 03:14:26.276647  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 03:14:26.276680  292146 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 03:14:26.276747  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.295769  292146 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:14:26.295791  292146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:14:26.295844  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:26.296007  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.317065  292146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:14:26.352619  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.357344  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.369370  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.377715  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.389956  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.407510  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.423166  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	W1124 03:14:26.429392  292146 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 03:14:26.429431  292146 retry.go:31] will retry after 320.525725ms: ssh: handshake failed: EOF
	I1124 03:14:26.436940  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.451705  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.456902  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:26.865829  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 03:14:26.912913  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 03:14:26.987802  292146 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 03:14:26.987826  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 03:14:27.000210  292146 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 03:14:27.000242  292146 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 03:14:27.017065  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 03:14:27.042647  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 03:14:27.125226  292146 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 03:14:27.125303  292146 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 03:14:27.132192  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 03:14:27.143986  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 03:14:27.144062  292146 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 03:14:27.159732  292146 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 03:14:27.159807  292146 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 03:14:27.192185  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:14:27.242145  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 03:14:27.242223  292146 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 03:14:27.242395  292146 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 03:14:27.242429  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 03:14:27.244686  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 03:14:27.246782  292146 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 03:14:27.246850  292146 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 03:14:27.269105  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 03:14:27.271105  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 03:14:27.273247  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:14:27.325778  292146 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 03:14:27.325852  292146 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 03:14:27.382639  292146 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 03:14:27.382715  292146 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 03:14:27.454362  292146 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 03:14:27.454479  292146 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 03:14:27.468000  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 03:14:27.479351  292146 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 03:14:27.479426  292146 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 03:14:27.493932  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 03:14:27.494009  292146 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 03:14:27.555697  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 03:14:27.652907  292146 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 03:14:27.652982  292146 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 03:14:27.674997  292146 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 03:14:27.675072  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 03:14:27.762247  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 03:14:27.762324  292146 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 03:14:27.830157  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 03:14:27.862016  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 03:14:27.862094  292146 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 03:14:27.917175  292146 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 03:14:27.917257  292146 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 03:14:28.015244  292146 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.698102319s)
	I1124 03:14:28.015497  292146 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.913241568s)
	I1124 03:14:28.015645  292146 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
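The long pipeline completed above edits the CoreDNS Corefile in place, inserting a hosts block that maps host.minikube.internal to the gateway IP 192.168.49.1. The result can be inspected with plain kubectl; a sketch:

    # expect a "hosts { 192.168.49.1 host.minikube.internal ... }" stanza before the forward plugin
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'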
	I1124 03:14:28.017010  292146 node_ready.go:35] waiting up to 6m0s for node "addons-153780" to be "Ready" ...
	I1124 03:14:28.040729  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.174865508s)
	I1124 03:14:28.176339  292146 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 03:14:28.176409  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 03:14:28.231621  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 03:14:28.290899  292146 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 03:14:28.290924  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 03:14:28.536929  292146 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-153780" context rescaled to 1 replicas
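The rescale above is the usual single-node trimming of CoreDNS down to one replica, roughly equivalent to:

    kubectl -n kube-system scale deployment coredns --replicas=1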
	I1124 03:14:28.687335  292146 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 03:14:28.687411  292146 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 03:14:28.949628  292146 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 03:14:28.949700  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 03:14:29.016217  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.103266144s)
	I1124 03:14:29.016343  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.999253663s)
	I1124 03:14:29.204784  292146 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 03:14:29.204855  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 03:14:29.339211  292146 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 03:14:29.339287  292146 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 03:14:29.437688  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 03:14:29.746059  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.613821766s)
	I1124 03:14:29.746248  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.703516167s)
	W1124 03:14:30.044317  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:30.407647  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.215358808s)
	I1124 03:14:31.922049  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.677285027s)
	I1124 03:14:31.922090  292146 addons.go:495] Verifying addon ingress=true in "addons-153780"
	I1124 03:14:31.922262  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.65308728s)
	I1124 03:14:31.922320  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.651153792s)
	I1124 03:14:31.922525  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.649196352s)
	I1124 03:14:31.922619  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.454549689s)
	I1124 03:14:31.922635  292146 addons.go:495] Verifying addon registry=true in "addons-153780"
	I1124 03:14:31.922700  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.36694058s)
	I1124 03:14:31.922712  292146 addons.go:495] Verifying addon metrics-server=true in "addons-153780"
	I1124 03:14:31.922750  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.092518519s)
	I1124 03:14:31.923127  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.691423266s)
	W1124 03:14:31.923317  292146 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 03:14:31.923360  292146 retry.go:31] will retry after 172.54789ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
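This failure is a known ordering race: the same kubectl apply creates the VolumeSnapshotClass CRD and a VolumeSnapshotClass object, and the CR is rejected because the CRD is not yet established in the API server. The retry above (and the later apply --force at 03:14:32) succeeds once the CRDs are registered. A common way to avoid the race outside this harness is to wait for the CRD before applying resources of that kind; a hedged sketch:

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml   # the kind now resolves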
	I1124 03:14:31.926129  292146 out.go:179] * Verifying registry addon...
	I1124 03:14:31.928180  292146 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-153780 service yakd-dashboard -n yakd-dashboard
	
	I1124 03:14:31.928220  292146 out.go:179] * Verifying ingress addon...
	I1124 03:14:31.931137  292146 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 03:14:31.933143  292146 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 03:14:31.942708  292146 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 03:14:31.942734  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:31.943376  292146 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 03:14:31.943395  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
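The kapi waits above poll pods by label selector until they report Running. Outside the test harness the same wait can be expressed with kubectl, using the selectors from the log (a sketch only; the ingress-nginx selector also matches completed admission-job pods, which would need excluding):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=registry --timeout=300s
    kubectl -n ingress-nginx wait --for=condition=Ready pod \
      -l app.kubernetes.io/name=ingress-nginx --timeout=300s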
	W1124 03:14:31.950207  292146 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
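The "object has been modified" error is Kubernetes optimistic concurrency: the storage class changed between the addon's read and its update, so the update must be retried against the latest resourceVersion. Default/non-default status is just an annotation, and kubectl patch sidesteps the conflict because it does not send a resourceVersion; a hedged sketch of what the addon is attempting:

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'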
	I1124 03:14:32.096087  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 03:14:32.220602  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.78281077s)
	I1124 03:14:32.220653  292146 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-153780"
	I1124 03:14:32.223999  292146 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 03:14:32.227828  292146 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 03:14:32.240180  292146 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 03:14:32.240213  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:32.435418  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:32.438707  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 03:14:32.520342  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:32.731138  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:32.935234  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:32.936834  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:33.231184  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:33.435652  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:33.436553  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:33.705943  292146 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 03:14:33.706030  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:33.724278  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:33.731870  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:33.839271  292146 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 03:14:33.852262  292146 addons.go:239] Setting addon gcp-auth=true in "addons-153780"
	I1124 03:14:33.852310  292146 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:14:33.852755  292146 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:14:33.870876  292146 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 03:14:33.870936  292146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:14:33.887558  292146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:14:33.934273  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:33.936863  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:34.231373  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:34.434437  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:34.436766  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 03:14:34.520796  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:34.731314  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:34.901693  292146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.80555755s)
	I1124 03:14:34.901782  292146 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.030881694s)
	I1124 03:14:34.904748  292146 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 03:14:34.908199  292146 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 03:14:34.910995  292146 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 03:14:34.911019  292146 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 03:14:34.924771  292146 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 03:14:34.924795  292146 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 03:14:34.935127  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:34.937956  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:34.942298  292146 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 03:14:34.942319  292146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 03:14:34.955726  292146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 03:14:35.232245  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:35.436229  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:35.455177  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:35.456603  292146 addons.go:495] Verifying addon gcp-auth=true in "addons-153780"
	I1124 03:14:35.459809  292146 out.go:179] * Verifying gcp-auth addon...
	I1124 03:14:35.463086  292146 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 03:14:35.550664  292146 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 03:14:35.550690  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
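Each kapi.go:96 line in this log is one iteration of a label-selector wait: list the pods matching the selector and keep polling while any of them is not yet Running. A minimal client-go sketch of that loop (waitForPods is an invented name; the gcp-auth selector and the ~0.5s cadence match the timestamps above):

// Hypothetical sketch of the kapi.go wait loop: poll a label selector until
// every matching pod reports phase Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		if !ready {
			fmt.Printf("waiting for pods %q: none found yet\n", selector)
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				ready = false
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(context.Background(), cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
		panic(err)
	}
}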
	I1124 03:14:35.731517  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:35.934636  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:35.936899  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:35.966914  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:36.231357  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:36.434823  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:36.436932  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:36.467169  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:36.731175  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:36.934176  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:36.936156  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:36.966021  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:37.021258  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:37.231223  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:37.435143  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:37.436778  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:37.466636  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:37.731334  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:37.934426  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:37.936628  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:37.966529  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:38.230909  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:38.433925  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:38.436055  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:38.467108  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:38.730873  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:38.934997  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:38.936274  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:38.966322  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:39.231821  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:39.435587  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:39.435735  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:39.466660  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:39.520364  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:39.731737  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:39.935693  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:39.937637  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:39.966306  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:40.231825  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:40.434892  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:40.437204  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:40.467518  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:40.731247  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:40.934126  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:40.936059  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:40.965858  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:41.230506  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:41.434680  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:41.437105  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:41.466226  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:41.730963  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:41.933819  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:41.935806  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:41.966719  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:42.021289  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:42.232036  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:42.434999  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:42.435967  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:42.467166  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:42.730733  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:42.934996  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:42.937405  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:42.965945  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:43.230695  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:43.434540  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:43.436999  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:43.467323  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:43.730840  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:43.934668  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:43.936595  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:43.966673  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:44.232037  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:44.435024  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:44.437123  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:44.467662  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:44.520423  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:44.731543  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:44.935205  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:44.936625  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:44.966470  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:45.238734  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:45.435507  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:45.436033  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:45.466689  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:45.731711  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:45.936040  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:45.936178  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:45.967005  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:46.231855  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:46.435053  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:46.436865  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:46.467514  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:46.731479  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:46.935141  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:46.936675  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:46.966644  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:47.021105  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:47.231642  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:47.434824  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:47.437049  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:47.466755  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:47.730660  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:47.935463  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:47.937974  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:47.966745  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:48.231200  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:48.434516  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:48.436740  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:48.467293  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:48.731595  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:48.934854  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:48.937815  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:48.966447  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:49.232437  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:49.436764  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:49.437326  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:49.466269  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:49.519912  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:49.730799  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:49.934606  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:49.936548  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:49.967015  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:50.231142  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:50.435492  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:50.436664  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:50.466993  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:50.730833  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:50.935557  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:50.935683  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:50.967710  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:51.230880  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:51.438532  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:51.442174  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:51.465930  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:51.520952  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:51.731221  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:51.934511  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:51.936737  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:51.966588  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:52.231516  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:52.434624  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:52.437004  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:52.466419  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:52.731140  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:52.934130  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:52.936170  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:52.965978  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:53.231209  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:53.434205  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:53.436236  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:53.466243  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:53.731240  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:53.934434  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:53.936489  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:53.966405  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:54.020319  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:54.231409  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:54.435295  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:54.436397  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:54.472758  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:54.731501  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:54.934599  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:54.936616  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:54.966199  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:55.231045  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:55.433893  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:55.435964  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:55.466887  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:55.731704  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:55.934379  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:55.936395  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:55.966176  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:56.021057  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:56.231011  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:56.433905  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:56.436227  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:56.465996  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:56.730430  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:56.934265  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:56.936594  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:56.966514  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:57.231319  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:57.434350  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:57.436412  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:57.466267  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:57.730981  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:57.933931  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:57.936107  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:57.966751  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:14:58.021226  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:14:58.230987  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:58.435260  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:58.436443  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:58.466280  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:58.730964  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:58.935383  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:58.935543  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:58.966448  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:59.231103  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:59.435030  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:59.436449  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:59.466548  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:14:59.731431  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:14:59.934211  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:14:59.936537  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:14:59.966123  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:00.244054  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:00.461133  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:00.466166  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:00.469334  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:15:00.520889  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:15:00.731128  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:00.935919  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:00.936295  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:00.966688  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:01.231782  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:01.436669  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:01.436804  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:01.466897  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:01.731896  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:01.937913  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:01.937879  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:01.967011  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:02.230899  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:02.434875  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:02.436140  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:02.466949  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:02.731158  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:02.934842  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:02.936947  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:02.966496  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:15:03.020383  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:15:03.231344  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:03.434554  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:03.436885  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:03.466579  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:03.731743  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:03.935100  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:03.937303  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:03.966308  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:04.231364  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:04.434780  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:04.437046  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:04.466771  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:04.731414  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:04.934506  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:04.936525  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:04.966511  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1124 03:15:05.020630  292146 node_ready.go:57] node "addons-153780" has "Ready":"False" status (will retry)
	I1124 03:15:05.230799  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:05.435121  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:05.436272  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:05.466157  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:05.731163  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:05.934110  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:05.936455  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:05.966092  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:06.231327  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:06.434646  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:06.436664  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:06.466541  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:06.731247  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:06.934627  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:06.936321  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:06.965814  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:07.243312  292146 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 03:15:07.243338  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:07.518345  292146 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 03:15:07.518370  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:07.518938  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:07.529803  292146 node_ready.go:49] node "addons-153780" is "Ready"
	I1124 03:15:07.529843  292146 node_ready.go:38] duration metric: took 39.512761217s for node "addons-153780" to be "Ready" ...
	I1124 03:15:07.529858  292146 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:15:07.529916  292146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:15:07.537946  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:07.567856  292146 api_server.go:72] duration metric: took 41.908182081s to wait for apiserver process to appear ...
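Before checking API health, the harness waits for the kube-apiserver process itself using pgrep over SSH, as in the ssh_runner call above. A condensed local sketch of the same gate (waitForAPIServerProcess is an invented name; the pgrep flags mirror the logged command):

// Hypothetical sketch of the apiserver process check: pgrep for the newest
// full-command-line match of kube-apiserver, retrying until it appears.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x: exact match, -n: newest process, -f: match the full command line.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(time.Minute); err != nil {
		panic(err)
	}
}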
	I1124 03:15:07.567885  292146 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:15:07.567906  292146 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1124 03:15:07.583208  292146 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1124 03:15:07.584380  292146 api_server.go:141] control plane version: v1.34.1
	I1124 03:15:07.584408  292146 api_server.go:131] duration metric: took 16.514848ms to wait for apiserver health ...
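With the process up, the harness polls the apiserver's /healthz endpoint until it returns 200 with body "ok", exactly as logged above. A minimal sketch, using an insecure TLS client as a stand-in for the real one that trusts the cluster CA from the kubeconfig (waitForHealthz is an invented name):

// Hypothetical sketch of the healthz gate: GET /healthz and treat HTTP 200
// as healthy, retrying once per second until a deadline.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only; the real check verifies the cluster CA
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200:\n%s\n", url, body)
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
}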
	I1124 03:15:07.584418  292146 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:15:07.618264  292146 system_pods.go:59] 19 kube-system pods found
	I1124 03:15:07.618308  292146 system_pods.go:61] "coredns-66bc5c9577-8cjzz" [813205d7-0fc2-43b3-b09e-fd0adc0ce6f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:15:07.618316  292146 system_pods.go:61] "csi-hostpath-attacher-0" [8cc94983-29b2-4964-ad78-8802ebd720ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 03:15:07.618325  292146 system_pods.go:61] "csi-hostpath-resizer-0" [aa4df875-9ab4-43ce-a426-3e5b33238e8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 03:15:07.618336  292146 system_pods.go:61] "csi-hostpathplugin-bgmwp" [7ac34006-f82d-4c20-be37-84bb40a7f088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 03:15:07.618351  292146 system_pods.go:61] "etcd-addons-153780" [7fccdbed-b6f5-44fc-84ed-ea8536e594a2] Running
	I1124 03:15:07.618356  292146 system_pods.go:61] "kindnet-l29tl" [5e804658-7ef3-4add-9c08-6bade404f062] Running
	I1124 03:15:07.618359  292146 system_pods.go:61] "kube-apiserver-addons-153780" [33128750-fdf0-4d19-ab27-35e1085f5427] Running
	I1124 03:15:07.618363  292146 system_pods.go:61] "kube-controller-manager-addons-153780" [32b7e482-1a0b-4345-99e4-1e6ba9820fa2] Running
	I1124 03:15:07.618368  292146 system_pods.go:61] "kube-ingress-dns-minikube" [9c7f31da-69b0-403d-8b5b-d77551be5987] Pending
	I1124 03:15:07.618373  292146 system_pods.go:61] "kube-proxy-5qvwc" [223de07d-a4d6-45d0-b693-86767f12aa77] Running
	I1124 03:15:07.618379  292146 system_pods.go:61] "kube-scheduler-addons-153780" [110900a6-740b-40d4-84f5-277228f10e28] Running
	I1124 03:15:07.618386  292146 system_pods.go:61] "metrics-server-85b7d694d7-k5xvk" [9b5678eb-b6ce-4ee5-bdb6-92da24f445f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 03:15:07.618399  292146 system_pods.go:61] "nvidia-device-plugin-daemonset-j7cvq" [3405d820-d287-4751-a138-a2c64aaf6375] Pending
	I1124 03:15:07.618403  292146 system_pods.go:61] "registry-6b586f9694-fhxm7" [37ea5e79-e46c-4241-ae8a-13e3a990caef] Pending
	I1124 03:15:07.618409  292146 system_pods.go:61] "registry-creds-764b6fb674-bk79n" [dc3ac97a-2ca5-48ca-9f54-00d5127f5172] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 03:15:07.618417  292146 system_pods.go:61] "registry-proxy-v264t" [ce8f2dcd-d97d-4ae3-96f5-94cb55bf9408] Pending
	I1124 03:15:07.618424  292146 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b6xbm" [59bfcec8-0051-4bc8-941f-3a818d75ef33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:07.618431  292146 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dwczj" [e4675ce7-03b5-4c7d-93f5-fea2600be8e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:07.618441  292146 system_pods.go:61] "storage-provisioner" [40735684-1273-4c6e-a78f-2682cfbeb780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:15:07.618466  292146 system_pods.go:74] duration metric: took 34.042194ms to wait for pod list to return data ...
	I1124 03:15:07.618476  292146 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:15:07.628519  292146 default_sa.go:45] found service account: "default"
	I1124 03:15:07.628546  292146 default_sa.go:55] duration metric: took 10.064016ms for default service account to be created ...
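The default-service-account gate exists because kube-controller-manager creates the "default" ServiceAccount asynchronously after a namespace appears, so an early Get can briefly return NotFound. A minimal sketch of the wait (waitForDefaultSA is an invented name):

// Hypothetical sketch of the default-SA wait: retry a Get of the "default"
// ServiceAccount in the "default" namespace until it exists.
package main

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
		if err == nil {
			return nil
		}
		if !errors.IsNotFound(err) {
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(250 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForDefaultSA(context.Background(), cs); err != nil {
		panic(err)
	}
}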
	I1124 03:15:07.628556  292146 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:15:07.650230  292146 system_pods.go:86] 19 kube-system pods found
	I1124 03:15:07.650264  292146 system_pods.go:89] "coredns-66bc5c9577-8cjzz" [813205d7-0fc2-43b3-b09e-fd0adc0ce6f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:15:07.650273  292146 system_pods.go:89] "csi-hostpath-attacher-0" [8cc94983-29b2-4964-ad78-8802ebd720ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 03:15:07.650282  292146 system_pods.go:89] "csi-hostpath-resizer-0" [aa4df875-9ab4-43ce-a426-3e5b33238e8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 03:15:07.650289  292146 system_pods.go:89] "csi-hostpathplugin-bgmwp" [7ac34006-f82d-4c20-be37-84bb40a7f088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 03:15:07.650294  292146 system_pods.go:89] "etcd-addons-153780" [7fccdbed-b6f5-44fc-84ed-ea8536e594a2] Running
	I1124 03:15:07.650304  292146 system_pods.go:89] "kindnet-l29tl" [5e804658-7ef3-4add-9c08-6bade404f062] Running
	I1124 03:15:07.650309  292146 system_pods.go:89] "kube-apiserver-addons-153780" [33128750-fdf0-4d19-ab27-35e1085f5427] Running
	I1124 03:15:07.650315  292146 system_pods.go:89] "kube-controller-manager-addons-153780" [32b7e482-1a0b-4345-99e4-1e6ba9820fa2] Running
	I1124 03:15:07.650320  292146 system_pods.go:89] "kube-ingress-dns-minikube" [9c7f31da-69b0-403d-8b5b-d77551be5987] Pending
	I1124 03:15:07.650335  292146 system_pods.go:89] "kube-proxy-5qvwc" [223de07d-a4d6-45d0-b693-86767f12aa77] Running
	I1124 03:15:07.650339  292146 system_pods.go:89] "kube-scheduler-addons-153780" [110900a6-740b-40d4-84f5-277228f10e28] Running
	I1124 03:15:07.650345  292146 system_pods.go:89] "metrics-server-85b7d694d7-k5xvk" [9b5678eb-b6ce-4ee5-bdb6-92da24f445f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 03:15:07.650356  292146 system_pods.go:89] "nvidia-device-plugin-daemonset-j7cvq" [3405d820-d287-4751-a138-a2c64aaf6375] Pending
	I1124 03:15:07.650361  292146 system_pods.go:89] "registry-6b586f9694-fhxm7" [37ea5e79-e46c-4241-ae8a-13e3a990caef] Pending
	I1124 03:15:07.650366  292146 system_pods.go:89] "registry-creds-764b6fb674-bk79n" [dc3ac97a-2ca5-48ca-9f54-00d5127f5172] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 03:15:07.650375  292146 system_pods.go:89] "registry-proxy-v264t" [ce8f2dcd-d97d-4ae3-96f5-94cb55bf9408] Pending
	I1124 03:15:07.650382  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xbm" [59bfcec8-0051-4bc8-941f-3a818d75ef33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:07.650390  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dwczj" [e4675ce7-03b5-4c7d-93f5-fea2600be8e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:07.650398  292146 system_pods.go:89] "storage-provisioner" [40735684-1273-4c6e-a78f-2682cfbeb780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:15:07.650414  292146 retry.go:31] will retry after 260.213876ms: missing components: kube-dns
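The retry.go lines show the k8s-apps wait re-running its pod scan after a short, non-round delay (260.213876ms here), which suggests a randomized backoff. A minimal sketch of that pattern, assuming the jitter exists to keep concurrent waiters from polling in lockstep (retryUntil and the check callback are invented; the real scan is the system_pods.go listing above):

// Hypothetical sketch of the retry.go pattern: re-run a check after a short
// jittered delay until nothing is reported missing.
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(ctx context.Context, base time.Duration, check func(context.Context) []string) error {
	for {
		missing := check(ctx)
		if len(missing) == 0 {
			return nil
		}
		// Randomize the delay so concurrent waiters spread their polls out.
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: missing components: %v\n", d, missing)
		select {
		case <-ctx.Done():
			return fmt.Errorf("still missing %v: %w", missing, ctx.Err())
		case <-time.After(d):
		}
	}
}

func main() {
	// Toy stand-in for the kube-system scan: report kube-dns missing once.
	missing := []string{"kube-dns"}
	check := func(context.Context) []string {
		m := missing
		missing = nil
		return m
	}
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := retryUntil(ctx, 250*time.Millisecond, check); err != nil {
		panic(err)
	}
}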
	I1124 03:15:07.739423  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:07.923803  292146 system_pods.go:86] 19 kube-system pods found
	I1124 03:15:07.923841  292146 system_pods.go:89] "coredns-66bc5c9577-8cjzz" [813205d7-0fc2-43b3-b09e-fd0adc0ce6f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:15:07.923852  292146 system_pods.go:89] "csi-hostpath-attacher-0" [8cc94983-29b2-4964-ad78-8802ebd720ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 03:15:07.923860  292146 system_pods.go:89] "csi-hostpath-resizer-0" [aa4df875-9ab4-43ce-a426-3e5b33238e8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 03:15:07.923868  292146 system_pods.go:89] "csi-hostpathplugin-bgmwp" [7ac34006-f82d-4c20-be37-84bb40a7f088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 03:15:07.923877  292146 system_pods.go:89] "etcd-addons-153780" [7fccdbed-b6f5-44fc-84ed-ea8536e594a2] Running
	I1124 03:15:07.923882  292146 system_pods.go:89] "kindnet-l29tl" [5e804658-7ef3-4add-9c08-6bade404f062] Running
	I1124 03:15:07.923890  292146 system_pods.go:89] "kube-apiserver-addons-153780" [33128750-fdf0-4d19-ab27-35e1085f5427] Running
	I1124 03:15:07.923894  292146 system_pods.go:89] "kube-controller-manager-addons-153780" [32b7e482-1a0b-4345-99e4-1e6ba9820fa2] Running
	I1124 03:15:07.923908  292146 system_pods.go:89] "kube-ingress-dns-minikube" [9c7f31da-69b0-403d-8b5b-d77551be5987] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 03:15:07.923912  292146 system_pods.go:89] "kube-proxy-5qvwc" [223de07d-a4d6-45d0-b693-86767f12aa77] Running
	I1124 03:15:07.923923  292146 system_pods.go:89] "kube-scheduler-addons-153780" [110900a6-740b-40d4-84f5-277228f10e28] Running
	I1124 03:15:07.923930  292146 system_pods.go:89] "metrics-server-85b7d694d7-k5xvk" [9b5678eb-b6ce-4ee5-bdb6-92da24f445f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 03:15:07.923937  292146 system_pods.go:89] "nvidia-device-plugin-daemonset-j7cvq" [3405d820-d287-4751-a138-a2c64aaf6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 03:15:07.923947  292146 system_pods.go:89] "registry-6b586f9694-fhxm7" [37ea5e79-e46c-4241-ae8a-13e3a990caef] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 03:15:07.923952  292146 system_pods.go:89] "registry-creds-764b6fb674-bk79n" [dc3ac97a-2ca5-48ca-9f54-00d5127f5172] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 03:15:07.923969  292146 system_pods.go:89] "registry-proxy-v264t" [ce8f2dcd-d97d-4ae3-96f5-94cb55bf9408] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 03:15:07.923981  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xbm" [59bfcec8-0051-4bc8-941f-3a818d75ef33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:07.923987  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dwczj" [e4675ce7-03b5-4c7d-93f5-fea2600be8e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:07.923997  292146 system_pods.go:89] "storage-provisioner" [40735684-1273-4c6e-a78f-2682cfbeb780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:15:07.924016  292146 retry.go:31] will retry after 352.354756ms: missing components: kube-dns
	I1124 03:15:08.039063  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:08.040925  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:08.041092  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:08.236484  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:08.355371  292146 system_pods.go:86] 19 kube-system pods found
	I1124 03:15:08.355413  292146 system_pods.go:89] "coredns-66bc5c9577-8cjzz" [813205d7-0fc2-43b3-b09e-fd0adc0ce6f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:15:08.355425  292146 system_pods.go:89] "csi-hostpath-attacher-0" [8cc94983-29b2-4964-ad78-8802ebd720ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 03:15:08.355432  292146 system_pods.go:89] "csi-hostpath-resizer-0" [aa4df875-9ab4-43ce-a426-3e5b33238e8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 03:15:08.355441  292146 system_pods.go:89] "csi-hostpathplugin-bgmwp" [7ac34006-f82d-4c20-be37-84bb40a7f088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 03:15:08.355455  292146 system_pods.go:89] "etcd-addons-153780" [7fccdbed-b6f5-44fc-84ed-ea8536e594a2] Running
	I1124 03:15:08.355461  292146 system_pods.go:89] "kindnet-l29tl" [5e804658-7ef3-4add-9c08-6bade404f062] Running
	I1124 03:15:08.355473  292146 system_pods.go:89] "kube-apiserver-addons-153780" [33128750-fdf0-4d19-ab27-35e1085f5427] Running
	I1124 03:15:08.355483  292146 system_pods.go:89] "kube-controller-manager-addons-153780" [32b7e482-1a0b-4345-99e4-1e6ba9820fa2] Running
	I1124 03:15:08.355489  292146 system_pods.go:89] "kube-ingress-dns-minikube" [9c7f31da-69b0-403d-8b5b-d77551be5987] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 03:15:08.355493  292146 system_pods.go:89] "kube-proxy-5qvwc" [223de07d-a4d6-45d0-b693-86767f12aa77] Running
	I1124 03:15:08.355503  292146 system_pods.go:89] "kube-scheduler-addons-153780" [110900a6-740b-40d4-84f5-277228f10e28] Running
	I1124 03:15:08.355509  292146 system_pods.go:89] "metrics-server-85b7d694d7-k5xvk" [9b5678eb-b6ce-4ee5-bdb6-92da24f445f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 03:15:08.355515  292146 system_pods.go:89] "nvidia-device-plugin-daemonset-j7cvq" [3405d820-d287-4751-a138-a2c64aaf6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 03:15:08.355526  292146 system_pods.go:89] "registry-6b586f9694-fhxm7" [37ea5e79-e46c-4241-ae8a-13e3a990caef] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 03:15:08.355536  292146 system_pods.go:89] "registry-creds-764b6fb674-bk79n" [dc3ac97a-2ca5-48ca-9f54-00d5127f5172] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 03:15:08.355544  292146 system_pods.go:89] "registry-proxy-v264t" [ce8f2dcd-d97d-4ae3-96f5-94cb55bf9408] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 03:15:08.355553  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xbm" [59bfcec8-0051-4bc8-941f-3a818d75ef33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:08.355565  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dwczj" [e4675ce7-03b5-4c7d-93f5-fea2600be8e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:08.355576  292146 system_pods.go:89] "storage-provisioner" [40735684-1273-4c6e-a78f-2682cfbeb780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:15:08.355591  292146 retry.go:31] will retry after 345.748341ms: missing components: kube-dns
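The "will retry after 345.748341ms" line above (and the 475.906663ms one in the next cycle) comes from a poll-with-randomized-backoff loop. The sketch below is illustrative only, not minikube's actual retry.go: a self-contained Go program whose waitForComponents helper, delay formula, and fake kube-dns check are all hypothetical stand-ins, chosen to produce uneven, growing delays like the ones logged here.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForComponents retries check() until it succeeds or the deadline
    // passes, sleeping a jittered, growing delay between attempts and
    // logging it the way the "will retry after ..." lines above appear.
    func waitForComponents(check func() error, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	base := 200 * time.Millisecond
    	for attempt := 0; ; attempt++ {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: %w", err)
    		}
    		// Jittered backoff: base * 2^attempt, scaled by a random
    		// factor in [0.5, 1.5); capped so the shift cannot overflow.
    		shift := attempt
    		if shift > 6 {
    			shift = 6
    		}
    		delay := time.Duration(float64(base) * float64(int(1)<<shift) * (0.5 + rand.Float64()))
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    }

    func main() {
    	remaining := 2 // pretend kube-dns needs a couple of polls to come up
    	err := waitForComponents(func() error {
    		if remaining > 0 {
    			remaining--
    			return errors.New("missing components: kube-dns")
    		}
    		return nil
    	}, 30*time.Second)
    	fmt.Println("wait finished, err =", err)
    }

The jitter is why the two retry delays in this log (≈346ms, ≈476ms) are close to, but not exactly, a doubling of a fixed base interval.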
	I1124 03:15:08.444567  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:08.445033  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:08.467009  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:08.706917  292146 system_pods.go:86] 19 kube-system pods found
	I1124 03:15:08.706956  292146 system_pods.go:89] "coredns-66bc5c9577-8cjzz" [813205d7-0fc2-43b3-b09e-fd0adc0ce6f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:15:08.706965  292146 system_pods.go:89] "csi-hostpath-attacher-0" [8cc94983-29b2-4964-ad78-8802ebd720ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 03:15:08.706971  292146 system_pods.go:89] "csi-hostpath-resizer-0" [aa4df875-9ab4-43ce-a426-3e5b33238e8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 03:15:08.706978  292146 system_pods.go:89] "csi-hostpathplugin-bgmwp" [7ac34006-f82d-4c20-be37-84bb40a7f088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 03:15:08.706987  292146 system_pods.go:89] "etcd-addons-153780" [7fccdbed-b6f5-44fc-84ed-ea8536e594a2] Running
	I1124 03:15:08.706992  292146 system_pods.go:89] "kindnet-l29tl" [5e804658-7ef3-4add-9c08-6bade404f062] Running
	I1124 03:15:08.707002  292146 system_pods.go:89] "kube-apiserver-addons-153780" [33128750-fdf0-4d19-ab27-35e1085f5427] Running
	I1124 03:15:08.707006  292146 system_pods.go:89] "kube-controller-manager-addons-153780" [32b7e482-1a0b-4345-99e4-1e6ba9820fa2] Running
	I1124 03:15:08.707014  292146 system_pods.go:89] "kube-ingress-dns-minikube" [9c7f31da-69b0-403d-8b5b-d77551be5987] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 03:15:08.707021  292146 system_pods.go:89] "kube-proxy-5qvwc" [223de07d-a4d6-45d0-b693-86767f12aa77] Running
	I1124 03:15:08.707025  292146 system_pods.go:89] "kube-scheduler-addons-153780" [110900a6-740b-40d4-84f5-277228f10e28] Running
	I1124 03:15:08.707033  292146 system_pods.go:89] "metrics-server-85b7d694d7-k5xvk" [9b5678eb-b6ce-4ee5-bdb6-92da24f445f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 03:15:08.707043  292146 system_pods.go:89] "nvidia-device-plugin-daemonset-j7cvq" [3405d820-d287-4751-a138-a2c64aaf6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 03:15:08.707049  292146 system_pods.go:89] "registry-6b586f9694-fhxm7" [37ea5e79-e46c-4241-ae8a-13e3a990caef] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 03:15:08.707059  292146 system_pods.go:89] "registry-creds-764b6fb674-bk79n" [dc3ac97a-2ca5-48ca-9f54-00d5127f5172] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 03:15:08.707068  292146 system_pods.go:89] "registry-proxy-v264t" [ce8f2dcd-d97d-4ae3-96f5-94cb55bf9408] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 03:15:08.707074  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xbm" [59bfcec8-0051-4bc8-941f-3a818d75ef33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:08.707083  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dwczj" [e4675ce7-03b5-4c7d-93f5-fea2600be8e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:08.707089  292146 system_pods.go:89] "storage-provisioner" [40735684-1273-4c6e-a78f-2682cfbeb780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:15:08.707105  292146 retry.go:31] will retry after 475.906663ms: missing components: kube-dns
	I1124 03:15:08.733325  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:08.941475  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:08.941804  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:08.967311  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:09.188987  292146 system_pods.go:86] 19 kube-system pods found
	I1124 03:15:09.189016  292146 system_pods.go:89] "coredns-66bc5c9577-8cjzz" [813205d7-0fc2-43b3-b09e-fd0adc0ce6f0] Running
	I1124 03:15:09.189026  292146 system_pods.go:89] "csi-hostpath-attacher-0" [8cc94983-29b2-4964-ad78-8802ebd720ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 03:15:09.189036  292146 system_pods.go:89] "csi-hostpath-resizer-0" [aa4df875-9ab4-43ce-a426-3e5b33238e8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 03:15:09.189043  292146 system_pods.go:89] "csi-hostpathplugin-bgmwp" [7ac34006-f82d-4c20-be37-84bb40a7f088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 03:15:09.189051  292146 system_pods.go:89] "etcd-addons-153780" [7fccdbed-b6f5-44fc-84ed-ea8536e594a2] Running
	I1124 03:15:09.189056  292146 system_pods.go:89] "kindnet-l29tl" [5e804658-7ef3-4add-9c08-6bade404f062] Running
	I1124 03:15:09.189069  292146 system_pods.go:89] "kube-apiserver-addons-153780" [33128750-fdf0-4d19-ab27-35e1085f5427] Running
	I1124 03:15:09.189074  292146 system_pods.go:89] "kube-controller-manager-addons-153780" [32b7e482-1a0b-4345-99e4-1e6ba9820fa2] Running
	I1124 03:15:09.189079  292146 system_pods.go:89] "kube-ingress-dns-minikube" [9c7f31da-69b0-403d-8b5b-d77551be5987] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 03:15:09.189083  292146 system_pods.go:89] "kube-proxy-5qvwc" [223de07d-a4d6-45d0-b693-86767f12aa77] Running
	I1124 03:15:09.189087  292146 system_pods.go:89] "kube-scheduler-addons-153780" [110900a6-740b-40d4-84f5-277228f10e28] Running
	I1124 03:15:09.189099  292146 system_pods.go:89] "metrics-server-85b7d694d7-k5xvk" [9b5678eb-b6ce-4ee5-bdb6-92da24f445f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 03:15:09.189108  292146 system_pods.go:89] "nvidia-device-plugin-daemonset-j7cvq" [3405d820-d287-4751-a138-a2c64aaf6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 03:15:09.189122  292146 system_pods.go:89] "registry-6b586f9694-fhxm7" [37ea5e79-e46c-4241-ae8a-13e3a990caef] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 03:15:09.189129  292146 system_pods.go:89] "registry-creds-764b6fb674-bk79n" [dc3ac97a-2ca5-48ca-9f54-00d5127f5172] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 03:15:09.189135  292146 system_pods.go:89] "registry-proxy-v264t" [ce8f2dcd-d97d-4ae3-96f5-94cb55bf9408] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 03:15:09.189142  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6xbm" [59bfcec8-0051-4bc8-941f-3a818d75ef33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:09.189152  292146 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dwczj" [e4675ce7-03b5-4c7d-93f5-fea2600be8e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 03:15:09.189158  292146 system_pods.go:89] "storage-provisioner" [40735684-1273-4c6e-a78f-2682cfbeb780] Running
	I1124 03:15:09.189167  292146 system_pods.go:126] duration metric: took 1.560604954s to wait for k8s-apps to be running ...
	I1124 03:15:09.189179  292146 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:15:09.189235  292146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:15:09.203834  292146 system_svc.go:56] duration metric: took 14.646576ms WaitForService to wait for kubelet
	I1124 03:15:09.203865  292146 kubeadm.go:587] duration metric: took 43.544194887s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:15:09.203883  292146 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:15:09.207492  292146 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:15:09.207528  292146 node_conditions.go:123] node cpu capacity is 2
	I1124 03:15:09.207542  292146 node_conditions.go:105] duration metric: took 3.652395ms to run NodePressure ...
	I1124 03:15:09.207554  292146 start.go:242] waiting for startup goroutines ...
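The block above closes the startup wait: system pods became Running after 1.56s, the kubelet service check took ~15ms, and the NodePressure verification read the node's capacity (203034800Ki ephemeral storage, 2 CPUs). As a point of reference, the same capacity fields can be read with client-go; this is a minimal sketch under the assumption of a standard ~/.kube/config, not minikube's internal node_conditions.go.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the default kubeconfig (assumption: cluster reachable there).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Print the two capacity fields the log verifies for NodePressure.
    	for _, n := range nodes.Items {
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
    	}
    }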
	I1124 03:15:09.232737  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:09.438204  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:09.438447  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:09.537784  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:09.731758  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:09.939208  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:09.940540  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:09.969770  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:10.233510  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:10.439862  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:10.440144  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:10.466243  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:10.735602  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:10.943809  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:10.944224  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:11.040489  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:11.238603  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:11.440297  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:11.440835  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:11.471613  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:11.731496  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:11.953449  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:11.953475  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:11.979370  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:12.232223  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:12.435214  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:12.437625  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:12.466768  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:12.731576  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:12.936643  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:12.936744  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:12.967126  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:13.231922  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:13.434754  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:13.437431  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:13.466076  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:13.731960  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:13.937209  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:13.937629  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:13.967169  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:14.231520  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:14.441655  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:14.444049  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:14.466148  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:14.732291  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:14.935672  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:14.939116  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:14.967147  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:15.232378  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:15.435322  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:15.436746  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:15.467213  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:15.731956  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:15.937182  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:15.937724  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:15.966657  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:16.231956  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:16.436613  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:16.436728  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:16.466888  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:16.731482  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:16.934907  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:16.937693  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:16.967003  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:17.232818  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:17.435214  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:17.438102  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:17.466548  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:17.732240  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:17.936496  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:17.937304  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:17.967131  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:18.232156  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:18.435191  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:18.438344  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:18.471013  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:18.734591  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:18.937745  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:18.937935  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:18.967770  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:19.237008  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:19.440988  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:19.441766  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:19.542318  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:19.731868  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:19.936020  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:19.937841  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:19.967148  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:20.232087  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:20.434832  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:20.436529  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:20.467142  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:20.731755  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:20.937479  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:20.938382  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:20.967244  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:21.231912  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:21.433916  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:21.436346  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:21.466811  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:21.731750  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:21.942976  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:21.943510  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:21.967120  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:22.231301  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:22.434879  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:22.437019  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:22.466741  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:22.731595  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:22.953170  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:22.953545  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:23.047744  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:23.232151  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:23.434845  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:23.437980  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:23.467165  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:23.732158  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:23.934918  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:23.937929  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:23.967048  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:24.232056  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:24.452480  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:24.452937  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:24.469561  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:24.731560  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:24.940368  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:24.942409  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:24.969429  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:25.231939  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:25.434681  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:25.436496  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:25.466319  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:25.731721  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:25.935088  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:25.937105  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:25.966994  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:26.231325  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:26.437443  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:26.438177  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:26.466283  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:26.732602  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:26.937400  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:26.937738  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:26.967419  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:27.232218  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:27.434839  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:27.437878  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:27.467285  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:27.731901  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:27.937166  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:27.937537  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:27.966833  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:28.231965  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:28.438977  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:28.438988  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:28.467028  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:28.732352  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:28.937052  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:28.938285  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:28.966652  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:29.231514  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:29.436685  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:29.437269  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:29.466485  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:29.732344  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:29.939010  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:29.939169  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:29.966253  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:30.231668  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:30.437545  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:30.437929  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:30.466991  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:30.731753  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:30.937206  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:30.937717  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:30.967089  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:31.232801  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:31.434944  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:31.436671  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:31.469319  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:31.732201  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:31.941078  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:31.941467  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:31.966902  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:32.231673  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:32.435628  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:32.437027  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:32.466557  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:32.731602  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:32.947124  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:32.947286  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:32.971948  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:33.231263  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:33.439558  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:33.442345  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:33.466343  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:33.732310  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:33.936289  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:33.937928  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:33.967044  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:34.232016  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:34.438064  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:34.438621  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:34.466884  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:34.732066  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:34.934241  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:34.936376  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:34.966391  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:35.232549  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:35.435687  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:35.436951  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:35.467113  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:35.732305  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:35.934567  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:35.937067  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:35.967253  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:36.231906  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:36.437639  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:36.437894  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:36.466796  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:36.732545  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:36.940181  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:36.940832  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:37.039016  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:37.230819  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:37.436887  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:37.437166  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:37.466282  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:37.731423  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:37.937589  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:37.941253  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:37.979479  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:38.232552  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:38.435784  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:38.436115  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:38.465930  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:38.731262  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:38.935547  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:38.936684  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:38.970142  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:39.231618  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:39.436978  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:39.437735  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:39.466533  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:39.731923  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:39.937241  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:39.937596  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:39.966464  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:40.232549  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:40.435086  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:40.437956  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:40.466922  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:40.733023  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:40.936435  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:40.938391  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:40.966735  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:41.232078  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:41.434710  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:41.437183  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:41.466337  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:41.731926  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:41.942141  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:41.946672  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:42.007786  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:42.234649  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:42.435980  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:42.436430  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:42.466291  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:42.732597  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:42.937223  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:42.937391  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:42.968450  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:43.232434  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:43.434796  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:43.437243  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:43.466160  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:43.731739  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:43.934878  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:43.936714  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:43.966529  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:44.232022  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:44.436221  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:44.436410  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:44.465981  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:44.731728  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:44.934524  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:44.937108  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:44.966166  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:45.238362  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:45.434190  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:45.436821  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:45.466355  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:45.732038  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:45.936184  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:45.937003  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:45.966965  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:46.232119  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:46.435841  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:46.437457  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:46.466777  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:46.734990  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:46.934983  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:46.937236  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:46.966140  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:47.232276  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:47.434644  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:47.437202  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:47.466862  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:47.731469  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:47.934429  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 03:15:47.936343  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:47.966491  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:48.233596  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:48.436104  292146 kapi.go:107] duration metric: took 1m16.504968863s to wait for kubernetes.io/minikube-addons=registry ...
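kapi.go:107 above marks the first selector to finish: the registry pods reached Running after 1m16.5s of the ~500ms polling visible throughout this log, while ingress-nginx, gcp-auth, and csi-hostpath-driver keep polling. A hedged sketch of that wait pattern follows; waitForLabel and its parameters are illustrative names, not minikube's exported API, and it assumes the same client-go setup as the previous sketch.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls pods matching selector in ns until at least one
    // is Running, logging each still-pending check like kapi.go:96 does.
    func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 {
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					return nil
    				}
    			}
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %q", selector)
    		}
    		fmt.Printf("waiting for pod %q, still pending\n", selector)
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in this log
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitForLabel(cs, "kube-system",
    		"kubernetes.io/minikube-addons=registry", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }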
	I1124 03:15:48.436272  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:48.466520  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:48.731360  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:48.937272  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:48.965910  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:49.232172  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:49.436712  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:49.466652  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:49.732282  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:49.937042  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:49.967061  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:50.232933  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:50.437886  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:50.467831  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:50.731789  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:50.936762  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:50.966596  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:51.232471  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:51.436780  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:51.466547  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:51.731308  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:51.936458  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:51.966601  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:52.232170  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:52.437313  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:52.473242  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:52.741467  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:52.936929  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:52.966790  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:53.232095  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:53.436842  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:53.467441  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:53.732404  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:53.936407  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:53.966273  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:54.232422  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:54.439186  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:54.467199  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:54.733138  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:54.937452  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:54.966569  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:55.236180  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:55.438925  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:55.467385  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:55.732706  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:55.937423  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:55.966531  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 03:15:56.232287  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:56.436993  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:56.466950  292146 kapi.go:107] duration metric: took 1m21.003863869s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 03:15:56.470257  292146 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-153780 cluster.
	I1124 03:15:56.473466  292146 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 03:15:56.476745  292146 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1124 03:15:56.733265  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:56.937309  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:57.231808  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:57.437071  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:57.731686  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:57.937263  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:58.231651  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:58.436518  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:58.731236  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:58.937678  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:59.231337  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:59.437910  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:15:59.731555  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:15:59.941276  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:16:00.286409  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:00.439665  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:16:00.731995  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:00.938114  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:16:01.232190  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:01.436336  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:16:01.731379  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:01.937644  292146 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 03:16:02.235904  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:02.436885  292146 kapi.go:107] duration metric: took 1m30.503736927s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 03:16:02.731912  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:03.237180  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:03.732213  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:04.232197  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:04.732171  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:05.231142  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:05.732008  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:06.233864  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:06.732015  292146 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 03:16:07.238005  292146 kapi.go:107] duration metric: took 1m35.010177979s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 03:16:07.241115  292146 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, registry-creds, amd-gpu-device-plugin, ingress-dns, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1124 03:16:07.244095  292146 addons.go:530] duration metric: took 1m41.583980373s for enable addons: enabled=[nvidia-device-plugin cloud-spanner registry-creds amd-gpu-device-plugin ingress-dns storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1124 03:16:07.244157  292146 start.go:247] waiting for cluster config update ...
	I1124 03:16:07.244213  292146 start.go:256] writing updated cluster config ...
	I1124 03:16:07.244512  292146 ssh_runner.go:195] Run: rm -f paused
	I1124 03:16:07.253548  292146 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:16:07.274177  292146 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8cjzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.289938  292146 pod_ready.go:94] pod "coredns-66bc5c9577-8cjzz" is "Ready"
	I1124 03:16:07.289962  292146 pod_ready.go:86] duration metric: took 15.755312ms for pod "coredns-66bc5c9577-8cjzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.292995  292146 pod_ready.go:83] waiting for pod "etcd-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.300259  292146 pod_ready.go:94] pod "etcd-addons-153780" is "Ready"
	I1124 03:16:07.300288  292146 pod_ready.go:86] duration metric: took 7.263485ms for pod "etcd-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.314983  292146 pod_ready.go:83] waiting for pod "kube-apiserver-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.324250  292146 pod_ready.go:94] pod "kube-apiserver-addons-153780" is "Ready"
	I1124 03:16:07.324276  292146 pod_ready.go:86] duration metric: took 9.265282ms for pod "kube-apiserver-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.329050  292146 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.658002  292146 pod_ready.go:94] pod "kube-controller-manager-addons-153780" is "Ready"
	I1124 03:16:07.658034  292146 pod_ready.go:86] duration metric: took 328.954925ms for pod "kube-controller-manager-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:07.858570  292146 pod_ready.go:83] waiting for pod "kube-proxy-5qvwc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:08.258030  292146 pod_ready.go:94] pod "kube-proxy-5qvwc" is "Ready"
	I1124 03:16:08.258065  292146 pod_ready.go:86] duration metric: took 399.466171ms for pod "kube-proxy-5qvwc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:08.458608  292146 pod_ready.go:83] waiting for pod "kube-scheduler-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:08.857956  292146 pod_ready.go:94] pod "kube-scheduler-addons-153780" is "Ready"
	I1124 03:16:08.857988  292146 pod_ready.go:86] duration metric: took 399.349403ms for pod "kube-scheduler-addons-153780" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:16:08.858003  292146 pod_ready.go:40] duration metric: took 1.604414917s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:16:08.914902  292146 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 03:16:08.918167  292146 out.go:179] * Done! kubectl is now configured to use "addons-153780" cluster and "default" namespace by default
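	
	The gcp-auth messages above describe opting a pod out of credential injection via the `gcp-auth-skip-secret` label. A minimal sketch of such a pod spec (the pod name and image are illustrative, not taken from this run; the context name matches the profile configured above):
	
	    kubectl --context addons-153780 apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds                 # hypothetical pod, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"     # gcp-auth webhook skips credential mounting
	    spec:
	      containers:
	      - name: app
	        image: busybox:stable
	        command: ["sleep", "3600"]
	    EOF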
	
	
	==> CRI-O <==
	Nov 24 03:16:51 addons-153780 crio[830]: time="2025-11-24T03:16:51.858588248Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-eefc238c-13a7-4139-bcbc-502e91e6b046 Namespace:local-path-storage ID:51cd81c2170cfd7c953e644ccbf5df5e10d9d5400ac41e5221efb237636ca445 UID:f7503d49-6338-43aa-a55a-0ae6108a9697 NetNS:/var/run/netns/755423d6-98d2-45e3-9130-11e3b1d1a831 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400121a3a8}] Aliases:map[]}"
	Nov 24 03:16:51 addons-153780 crio[830]: time="2025-11-24T03:16:51.8587423Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-eefc238c-13a7-4139-bcbc-502e91e6b046 for CNI network kindnet (type=ptp)"
	Nov 24 03:16:51 addons-153780 crio[830]: time="2025-11-24T03:16:51.862319618Z" level=info msg="Ran pod sandbox 51cd81c2170cfd7c953e644ccbf5df5e10d9d5400ac41e5221efb237636ca445 with infra container: local-path-storage/helper-pod-delete-pvc-eefc238c-13a7-4139-bcbc-502e91e6b046/POD" id=e38d07a3-1c51-4cf9-acb4-e3f9b1af8804 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:16:51 addons-153780 crio[830]: time="2025-11-24T03:16:51.867516772Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=623b761c-969b-4970-a0d7-48710aaa58a5 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:16:51 addons-153780 crio[830]: time="2025-11-24T03:16:51.873503341Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=94c7fd68-2986-40ec-857c-f995ef00e044 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:16:51 addons-153780 crio[830]: time="2025-11-24T03:16:51.883919049Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-eefc238c-13a7-4139-bcbc-502e91e6b046/helper-pod" id=c5a12433-ddaa-49de-a6a0-3c4e0a2cc297 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:16:51 addons-153780 crio[830]: time="2025-11-24T03:16:51.884027193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:16:51 addons-153780 crio[830]: time="2025-11-24T03:16:51.892196885Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:16:51 addons-153780 crio[830]: time="2025-11-24T03:16:51.892673599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:16:51 addons-153780 crio[830]: time="2025-11-24T03:16:51.910978143Z" level=info msg="Created container 1c7bb03f866f7f401e99ffd1ae6f47f6ee627fb2d29c68c1c3597fecac723ed6: local-path-storage/helper-pod-delete-pvc-eefc238c-13a7-4139-bcbc-502e91e6b046/helper-pod" id=c5a12433-ddaa-49de-a6a0-3c4e0a2cc297 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:16:51 addons-153780 crio[830]: time="2025-11-24T03:16:51.911813202Z" level=info msg="Starting container: 1c7bb03f866f7f401e99ffd1ae6f47f6ee627fb2d29c68c1c3597fecac723ed6" id=4b921b85-e459-45a9-b2b4-51f1a574daea name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:16:51 addons-153780 crio[830]: time="2025-11-24T03:16:51.920618829Z" level=info msg="Started container" PID=5801 containerID=1c7bb03f866f7f401e99ffd1ae6f47f6ee627fb2d29c68c1c3597fecac723ed6 description=local-path-storage/helper-pod-delete-pvc-eefc238c-13a7-4139-bcbc-502e91e6b046/helper-pod id=4b921b85-e459-45a9-b2b4-51f1a574daea name=/runtime.v1.RuntimeService/StartContainer sandboxID=51cd81c2170cfd7c953e644ccbf5df5e10d9d5400ac41e5221efb237636ca445
	Nov 24 03:16:52 addons-153780 crio[830]: time="2025-11-24T03:16:52.618684664Z" level=info msg="Stopping container: 9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933 (timeout: 30s)" id=262b5081-a73c-4536-9d6d-0b057bd2c3c8 name=/runtime.v1.RuntimeService/StopContainer
	Nov 24 03:16:52 addons-153780 crio[830]: time="2025-11-24T03:16:52.733831858Z" level=info msg="Stopped container 9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933: default/task-pv-pod/task-pv-container" id=262b5081-a73c-4536-9d6d-0b057bd2c3c8 name=/runtime.v1.RuntimeService/StopContainer
	Nov 24 03:16:52 addons-153780 crio[830]: time="2025-11-24T03:16:52.734530498Z" level=info msg="Stopping pod sandbox: c01ffab9fd7019368fb87958cf8df6f8b4797856dee4b93a1b93a8f7b98148f3" id=aaa3c410-fca2-4c18-82fa-a21c51a720ca name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 03:16:52 addons-153780 crio[830]: time="2025-11-24T03:16:52.734770557Z" level=info msg="Got pod network &{Name:task-pv-pod Namespace:default ID:c01ffab9fd7019368fb87958cf8df6f8b4797856dee4b93a1b93a8f7b98148f3 UID:3e35b203-63ec-4a61-862e-f5027e1bf54d NetNS:/var/run/netns/df2c758e-063d-487a-b0fd-cb908db94145 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400121abf8}] Aliases:map[]}"
	Nov 24 03:16:52 addons-153780 crio[830]: time="2025-11-24T03:16:52.734915321Z" level=info msg="Deleting pod default_task-pv-pod from CNI network \"kindnet\" (type=ptp)"
	Nov 24 03:16:52 addons-153780 crio[830]: time="2025-11-24T03:16:52.768790414Z" level=info msg="Stopped pod sandbox: c01ffab9fd7019368fb87958cf8df6f8b4797856dee4b93a1b93a8f7b98148f3" id=aaa3c410-fca2-4c18-82fa-a21c51a720ca name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 03:16:53 addons-153780 crio[830]: time="2025-11-24T03:16:53.408641815Z" level=info msg="Stopping pod sandbox: 51cd81c2170cfd7c953e644ccbf5df5e10d9d5400ac41e5221efb237636ca445" id=f44b58eb-22de-431c-a8b1-6d2787d26bf9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 03:16:53 addons-153780 crio[830]: time="2025-11-24T03:16:53.408908615Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-eefc238c-13a7-4139-bcbc-502e91e6b046 Namespace:local-path-storage ID:51cd81c2170cfd7c953e644ccbf5df5e10d9d5400ac41e5221efb237636ca445 UID:f7503d49-6338-43aa-a55a-0ae6108a9697 NetNS:/var/run/netns/755423d6-98d2-45e3-9130-11e3b1d1a831 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40027cf798}] Aliases:map[]}"
	Nov 24 03:16:53 addons-153780 crio[830]: time="2025-11-24T03:16:53.409044977Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-eefc238c-13a7-4139-bcbc-502e91e6b046 from CNI network \"kindnet\" (type=ptp)"
	Nov 24 03:16:53 addons-153780 crio[830]: time="2025-11-24T03:16:53.415617437Z" level=info msg="Removing container: 9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933" id=97c10f8e-97df-4f4c-a762-6941bc73a96c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:16:53 addons-153780 crio[830]: time="2025-11-24T03:16:53.419779474Z" level=info msg="Error loading conmon cgroup of container 9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933: cgroup deleted" id=97c10f8e-97df-4f4c-a762-6941bc73a96c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:16:53 addons-153780 crio[830]: time="2025-11-24T03:16:53.449490372Z" level=info msg="Removed container 9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933: default/task-pv-pod/task-pv-container" id=97c10f8e-97df-4f4c-a762-6941bc73a96c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:16:53 addons-153780 crio[830]: time="2025-11-24T03:16:53.468678501Z" level=info msg="Stopped pod sandbox: 51cd81c2170cfd7c953e644ccbf5df5e10d9d5400ac41e5221efb237636ca445" id=f44b58eb-22de-431c-a8b1-6d2787d26bf9 name=/runtime.v1.RuntimeService/StopPodSandbox
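	
	The entries above trace a complete CRI-O sandbox lifecycle for the local-path helper pod (RunPodSandbox, ImageStatus, CreateContainer, StartContainer, then StopContainer, StopPodSandbox, RemoveContainer). The same objects can be followed interactively on the node with crictl; a sketch using ID prefixes from the log (exact flags may vary slightly by crictl version):
	
	    # inside the node, e.g. via: minikube -p addons-153780 ssh
	    sudo crictl pods --name helper-pod-delete    # list matching pod sandboxes
	    sudo crictl ps -a                            # all containers, including exited ones
	    sudo crictl inspectp 51cd81c2170cf           # inspect the sandbox by ID prefix
	    sudo crictl logs 1c7bb03f866f7               # logs of the helper-pod container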
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	1c7bb03f866f7       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             6 seconds ago        Exited              helper-pod                               0                   51cd81c2170cf       helper-pod-delete-pvc-eefc238c-13a7-4139-bcbc-502e91e6b046   local-path-storage
	f64cf3831b44b       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            10 seconds ago       Exited              busybox                                  0                   b07dd3e7ad635       test-local-path                                              default
	8707e0ecb8288       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            13 seconds ago       Exited              helper-pod                               0                   2230131ad6a0c       helper-pod-create-pvc-eefc238c-13a7-4139-bcbc-502e91e6b046   local-path-storage
	80b5c6cab402e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          46 seconds ago       Running             busybox                                  0                   ed2481873fecd       busybox                                                      default
	3485678af5d19       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          51 seconds ago       Running             csi-snapshotter                          0                   bb3baae21d174       csi-hostpathplugin-bgmwp                                     kube-system
	7548077813b01       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          53 seconds ago       Running             csi-provisioner                          0                   bb3baae21d174       csi-hostpathplugin-bgmwp                                     kube-system
	8afdc0c7272e7       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            54 seconds ago       Running             liveness-probe                           0                   bb3baae21d174       csi-hostpathplugin-bgmwp                                     kube-system
	29d02ce914d44       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           55 seconds ago       Running             hostpath                                 0                   bb3baae21d174       csi-hostpathplugin-bgmwp                                     kube-system
	c4e122d9cd92b       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             56 seconds ago       Running             controller                               0                   1c49a9f3d02ce       ingress-nginx-controller-6c8bf45fb-pkh2n                     ingress-nginx
	e8d8201b249e1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 About a minute ago   Running             gcp-auth                                 0                   581b238386602       gcp-auth-78565c9fb4-2jxmt                                    gcp-auth
	f81bf2fb9f067       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            About a minute ago   Running             gadget                                   0                   46bf47846e0ed       gadget-xjjvh                                                 gadget
	9af2707445bb4       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   bb3baae21d174       csi-hostpathplugin-bgmwp                                     kube-system
	f1836e8795e70       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   aec4962db8d63       registry-proxy-v264t                                         kube-system
	bcef0ab7ff5ee       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   f313cd20b2a72       csi-hostpath-attacher-0                                      kube-system
	cdab3db25b699       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   4193d0241873f       yakd-dashboard-5ff678cb9-t6r26                               yakd-dashboard
	53665e5932341       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             About a minute ago   Exited              patch                                    2                   1c1daf467bbc7       ingress-nginx-admission-patch-gn8kb                          ingress-nginx
	87ca81f329f99       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   f9242f30df386       snapshot-controller-7d9fbc56b8-dwczj                         kube-system
	a3a1370782e11       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   990abaec552af       local-path-provisioner-648f6765c9-h9x7r                      local-path-storage
	e055a401fe670       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   9a612b4b01bb4       nvidia-device-plugin-daemonset-j7cvq                         kube-system
	40df276efacb0       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   bb4faa353df6a       snapshot-controller-7d9fbc56b8-b6xbm                         kube-system
	be428aa3a2a99       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   be3e4d9577b59       ingress-nginx-admission-create-bjhlt                         ingress-nginx
	99c706b4665c8       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   bb08308253b2b       csi-hostpath-resizer-0                                       kube-system
	dd5492a96c1be       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   0a6826b5451db       cloud-spanner-emulator-5bdddb765-gp9qf                       default
	e9e5ba99ab47b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   bb3baae21d174       csi-hostpathplugin-bgmwp                                     kube-system
	4bf7144c7e3cb       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   4189cc4637430       registry-6b586f9694-fhxm7                                    kube-system
	e0d73582da9fb       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   38c7a6f396cd1       kube-ingress-dns-minikube                                    kube-system
	d731175eced00       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   241a1bbad6257       metrics-server-85b7d694d7-k5xvk                              kube-system
	83cadb364a123       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   dec83e73546e1       storage-provisioner                                          kube-system
	122b5b3da819c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   45083915f69a1       coredns-66bc5c9577-8cjzz                                     kube-system
	8549937bedbf1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   ea682d6f2cc0e       kube-proxy-5qvwc                                             kube-system
	243940847a312       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   37d03133a0aac       kindnet-l29tl                                                kube-system
	78b45913b9ad7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   c4a55db6a6390       etcd-addons-153780                                           kube-system
	338b16a84542c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   b966f0f5f7f5d       kube-apiserver-addons-153780                                 kube-system
	9d6da0d20171f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   69ee129826d8d       kube-controller-manager-addons-153780                        kube-system
	44d60dc8fd9be       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   2dfeff642656b       kube-scheduler-addons-153780                                 kube-system
	
	
	==> coredns [122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824] <==
	[INFO] 10.244.0.15:36810 - 13947 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001673433s
	[INFO] 10.244.0.15:36810 - 14776 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00026159s
	[INFO] 10.244.0.15:36810 - 28268 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000193626s
	[INFO] 10.244.0.15:56159 - 14413 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000172285s
	[INFO] 10.244.0.15:56159 - 14166 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000252376s
	[INFO] 10.244.0.15:42234 - 56094 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000135345s
	[INFO] 10.244.0.15:42234 - 56547 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001761s
	[INFO] 10.244.0.15:48931 - 758 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000130504s
	[INFO] 10.244.0.15:48931 - 322 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011356s
	[INFO] 10.244.0.15:60061 - 58399 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001179578s
	[INFO] 10.244.0.15:60061 - 58852 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001178996s
	[INFO] 10.244.0.15:56263 - 41724 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000153019s
	[INFO] 10.244.0.15:56263 - 41312 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000228358s
	[INFO] 10.244.0.20:39443 - 34542 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000187808s
	[INFO] 10.244.0.20:34755 - 36988 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000194413s
	[INFO] 10.244.0.20:57268 - 49970 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115217s
	[INFO] 10.244.0.20:42364 - 61614 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000378268s
	[INFO] 10.244.0.20:35784 - 33968 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120994s
	[INFO] 10.244.0.20:39222 - 41340 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126713s
	[INFO] 10.244.0.20:60569 - 52252 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002022343s
	[INFO] 10.244.0.20:40069 - 627 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002995158s
	[INFO] 10.244.0.20:46483 - 28196 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000954494s
	[INFO] 10.244.0.20:38857 - 52313 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001736523s
	[INFO] 10.244.0.23:39001 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000184478s
	[INFO] 10.244.0.23:34598 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091791s
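	
	The NXDOMAIN/NOERROR pairs above are ordinary resolver search-path expansion: with the default ndots:5, a lookup of registry.kube-system.svc.cluster.local is first tried with each search suffix (each returning NXDOMAIN) before the absolute name answers NOERROR. Reconstructed from the suffixes visible in the queries, the resolver config of a kube-system pod on this cluster looks roughly like this (the nameserver address is the usual cluster DNS service IP and is an assumption, as it does not appear in this log):
	
	    $ cat /etc/resolv.conf     # inside a kube-system pod (reconstruction)
	    search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    nameserver 10.96.0.10      # assumed CoreDNS service IP, not shown in the log
	    options ndots:5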
	
	
	==> describe nodes <==
	Name:               addons-153780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-153780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=addons-153780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_14_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-153780
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-153780"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:14:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-153780
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:16:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:16:53 +0000   Mon, 24 Nov 2025 03:14:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:16:53 +0000   Mon, 24 Nov 2025 03:14:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:16:53 +0000   Mon, 24 Nov 2025 03:14:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:16:53 +0000   Mon, 24 Nov 2025 03:15:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-153780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                d03a0b76-1e9c-4c87-8eaa-1652e42b6d37
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  default                     cloud-spanner-emulator-5bdddb765-gp9qf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gadget                      gadget-xjjvh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gcp-auth                    gcp-auth-78565c9fb4-2jxmt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-pkh2n    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m28s
	  kube-system                 coredns-66bc5c9577-8cjzz                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m34s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 csi-hostpathplugin-bgmwp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 etcd-addons-153780                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m39s
	  kube-system                 kindnet-l29tl                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m34s
	  kube-system                 kube-apiserver-addons-153780                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-controller-manager-addons-153780       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-5qvwc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-scheduler-addons-153780                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 metrics-server-85b7d694d7-k5xvk             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m29s
	  kube-system                 nvidia-device-plugin-daemonset-j7cvq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 registry-6b586f9694-fhxm7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 registry-creds-764b6fb674-bk79n             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 registry-proxy-v264t                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 snapshot-controller-7d9fbc56b8-b6xbm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 snapshot-controller-7d9fbc56b8-dwczj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  local-path-storage          local-path-provisioner-648f6765c9-h9x7r     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-t6r26              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m32s                  kube-proxy       
	  Normal   Starting                 2m46s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m46s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m46s (x8 over 2m46s)  kubelet          Node addons-153780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m46s (x8 over 2m46s)  kubelet          Node addons-153780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m46s (x8 over 2m46s)  kubelet          Node addons-153780 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m39s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m39s                  kubelet          Node addons-153780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s                  kubelet          Node addons-153780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s                  kubelet          Node addons-153780 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m35s                  node-controller  Node addons-153780 event: Registered Node addons-153780 in Controller
	  Normal   NodeReady                112s                   kubelet          Node addons-153780 status is now: NodeReady
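	
	The node report above is standard kubectl describe output; the same snapshot can be regenerated against this profile, and since metrics-server was enabled in this run, live resource pressure is also available:
	
	    kubectl --context addons-153780 describe node addons-153780
	    kubectl --context addons-153780 top node     # requires metrics-server (enabled above)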
	
	
	==> dmesg <==
	[Nov24 01:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014604] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.520213] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036736] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.794505] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.307568] kauditd_printk_skb: 36 callbacks suppressed
	[Nov24 03:08] hrtimer: interrupt took 4583507 ns
	[Nov24 03:11] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 03:14] overlayfs: idmapped layers are currently not supported
	[  +0.056945] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce] <==
	{"level":"warn","ts":"2025-11-24T03:14:16.106132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.126861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.141355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.158568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.172950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.195137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.211023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.234807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.256072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.265739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.283612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.299482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.315220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.329111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.341565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.376827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.389732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.412442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:16.502674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:32.534779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:32.555207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:54.457595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:54.470978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:54.501931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:14:54.524240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51792","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [e8d8201b249e18df44325c8535a62429e576a461eb3c2193e682d2a5750823fd] <==
	2025/11/24 03:15:55 GCP Auth Webhook started!
	2025/11/24 03:16:09 Ready to marshal response ...
	2025/11/24 03:16:09 Ready to write response ...
	2025/11/24 03:16:09 Ready to marshal response ...
	2025/11/24 03:16:09 Ready to write response ...
	2025/11/24 03:16:09 Ready to marshal response ...
	2025/11/24 03:16:09 Ready to write response ...
	2025/11/24 03:16:31 Ready to marshal response ...
	2025/11/24 03:16:31 Ready to write response ...
	2025/11/24 03:16:39 Ready to marshal response ...
	2025/11/24 03:16:39 Ready to write response ...
	2025/11/24 03:16:42 Ready to marshal response ...
	2025/11/24 03:16:42 Ready to write response ...
	2025/11/24 03:16:42 Ready to marshal response ...
	2025/11/24 03:16:42 Ready to write response ...
	2025/11/24 03:16:51 Ready to marshal response ...
	2025/11/24 03:16:51 Ready to write response ...
	
	
	==> kernel <==
	 03:16:59 up  1:59,  0 user,  load average: 3.16, 2.78, 3.06
	Linux addons-153780 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684] <==
	I1124 03:14:58.121582       1 controller.go:711] "Syncing nftables rules"
	I1124 03:15:06.626468       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:15:06.626510       1 main.go:301] handling current node
	I1124 03:15:16.622613       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:15:16.622648       1 main.go:301] handling current node
	I1124 03:15:26.620526       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:15:26.620553       1 main.go:301] handling current node
	I1124 03:15:36.621675       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:15:36.621704       1 main.go:301] handling current node
	I1124 03:15:46.620597       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:15:46.620630       1 main.go:301] handling current node
	I1124 03:15:56.620536       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:15:56.620571       1 main.go:301] handling current node
	I1124 03:16:06.620548       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:16:06.620631       1 main.go:301] handling current node
	I1124 03:16:16.621255       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:16:16.621317       1 main.go:301] handling current node
	I1124 03:16:26.621903       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:16:26.621937       1 main.go:301] handling current node
	I1124 03:16:36.620557       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:16:36.620638       1 main.go:301] handling current node
	I1124 03:16:46.619853       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:16:46.619894       1 main.go:301] handling current node
	I1124 03:16:56.619871       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:16:56.619920       1 main.go:301] handling current node
	
	
	==> kube-apiserver [338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89] <==
	I1124 03:14:35.331861       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.96.80.42"}
	W1124 03:14:54.456128       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 03:14:54.470631       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 03:14:54.501915       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 03:14:54.517881       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 03:15:07.106159       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.80.42:443: connect: connection refused
	E1124 03:15:07.106299       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.80.42:443: connect: connection refused" logger="UnhandledError"
	W1124 03:15:07.109609       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.80.42:443: connect: connection refused
	E1124 03:15:07.109713       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.80.42:443: connect: connection refused" logger="UnhandledError"
	W1124 03:15:07.188692       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.80.42:443: connect: connection refused
	E1124 03:15:07.188733       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.80.42:443: connect: connection refused" logger="UnhandledError"
	W1124 03:15:11.827747       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 03:15:11.827828       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1124 03:15:11.829008       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.66.229:443: connect: connection refused" logger="UnhandledError"
	E1124 03:15:11.839554       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.66.229:443: connect: connection refused" logger="UnhandledError"
	E1124 03:15:11.840255       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.66.229:443: connect: connection refused" logger="UnhandledError"
	E1124 03:15:11.851034       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.66.229:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.66.229:443: connect: connection refused" logger="UnhandledError"
	I1124 03:15:11.996285       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 03:16:18.969642       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60580: use of closed network connection
	E1124 03:16:19.216840       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60606: use of closed network connection
	E1124 03:16:19.352296       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60636: use of closed network connection
	I1124 03:16:51.435680       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
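Note on the "failing open" lines above: the apiserver logs "Failed calling webhook, failing open" only for webhooks whose failurePolicy is Ignore, so the unreachable gcp-auth mutating webhook did not block pod admission during startup. A minimal client-go sketch (not part of this test suite; the kubeconfig path and output format are illustrative) that prints each mutating webhook's failure policy:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; inside a pod you would use rest.InClusterConfig instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	whcs, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, whc := range whcs.Items {
		for _, wh := range whc.Webhooks {
			// FailurePolicy is a pointer; the apiserver defaults it to Fail.
			policy := "Fail (defaulted)"
			if wh.FailurePolicy != nil {
				policy = string(*wh.FailurePolicy)
			}
			fmt.Printf("%s/%s failurePolicy=%s\n", whc.Name, wh.Name, policy)
		}
	}
}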
	
	==> kube-controller-manager [9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6] <==
	I1124 03:14:24.468001       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:14:24.469134       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:14:24.469191       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:14:24.469236       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:14:24.469630       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 03:14:24.469874       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:14:24.471608       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:14:24.471831       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 03:14:24.472348       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 03:14:24.472652       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:14:24.478962       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:14:24.481426       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:14:24.487628       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 03:14:24.518009       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:14:24.518096       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:14:24.518126       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1124 03:14:30.499025       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1124 03:14:54.448212       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 03:14:54.448366       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1124 03:14:54.448425       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1124 03:14:54.490404       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1124 03:14:54.494390       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 03:14:54.549256       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:14:54.595578       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:15:09.460078       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942] <==
	I1124 03:14:26.676486       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:14:26.761007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:14:26.862652       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:14:26.862721       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 03:14:26.862829       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:14:26.922428       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:14:26.922534       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:14:26.932335       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:14:26.932667       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:14:26.932682       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:14:26.933952       1 config.go:200] "Starting service config controller"
	I1124 03:14:26.933962       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:14:26.933977       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:14:26.933981       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:14:26.933995       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:14:26.933999       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:14:26.937967       1 config.go:309] "Starting node config controller"
	I1124 03:14:26.937985       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:14:26.937993       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:14:27.034446       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:14:27.034509       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:14:27.034574       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4] <==
	I1124 03:14:17.774550       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1124 03:14:17.773901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:14:17.773844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:14:17.777177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:14:17.777462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:14:17.777595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:14:17.777720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:14:17.777837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:14:17.777952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:14:17.778063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:14:17.778176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:14:17.778278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:14:17.778521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 03:14:17.778587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:14:17.784050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:14:17.784170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:14:17.784175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:14:17.784224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:14:17.784229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:14:17.784274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:14:18.607879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:14:18.616901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:14:18.658416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:14:18.662011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1124 03:14:19.376972       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
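The burst of "Failed to watch ... is forbidden" errors at 03:14:17-18 is the usual control-plane startup race: the scheduler's informers begin listing resources before the apiserver has finished establishing the system:kube-scheduler RBAC bindings. The closing "Caches are synced" line at 03:14:19 shows every watch recovered, so these errors are benign startup noise rather than related to the addon failures in this report.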
	
	==> kubelet <==
	Nov 24 03:16:52 addons-153780 kubelet[1282]: I1124 03:16:52.888607    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^fd92f65e-c8e3-11f0-9c36-023893081759" (OuterVolumeSpecName: "task-pv-storage") pod "3e35b203-63ec-4a61-862e-f5027e1bf54d" (UID: "3e35b203-63ec-4a61-862e-f5027e1bf54d"). InnerVolumeSpecName "pvc-e295a5be-6b2a-44dc-ad57-44e500ee0779". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 24 03:16:52 addons-153780 kubelet[1282]: I1124 03:16:52.977478    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wg6pw\" (UniqueName: \"kubernetes.io/projected/3e35b203-63ec-4a61-862e-f5027e1bf54d-kube-api-access-wg6pw\") on node \"addons-153780\" DevicePath \"\""
	Nov 24 03:16:52 addons-153780 kubelet[1282]: I1124 03:16:52.977568    1282 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-e295a5be-6b2a-44dc-ad57-44e500ee0779\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^fd92f65e-c8e3-11f0-9c36-023893081759\") on node \"addons-153780\" "
	Nov 24 03:16:52 addons-153780 kubelet[1282]: I1124 03:16:52.984867    1282 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-e295a5be-6b2a-44dc-ad57-44e500ee0779" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^fd92f65e-c8e3-11f0-9c36-023893081759") on node "addons-153780"
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.078896    1282 reconciler_common.go:299] "Volume detached for volume \"pvc-e295a5be-6b2a-44dc-ad57-44e500ee0779\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^fd92f65e-c8e3-11f0-9c36-023893081759\") on node \"addons-153780\" DevicePath \"\""
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.408145    1282 scope.go:117] "RemoveContainer" containerID="9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933"
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.450110    1282 scope.go:117] "RemoveContainer" containerID="9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933"
	Nov 24 03:16:53 addons-153780 kubelet[1282]: E1124 03:16:53.451632    1282 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933\": container with ID starting with 9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933 not found: ID does not exist" containerID="9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933"
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.451696    1282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933"} err="failed to get container status \"9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933\": rpc error: code = NotFound desc = could not find container \"9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933\": container with ID starting with 9d8a340533682c0939de9bce595af65af06bef8ec65c0252dd702c708161c933 not found: ID does not exist"
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.581881    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f7503d49-6338-43aa-a55a-0ae6108a9697-script\") pod \"f7503d49-6338-43aa-a55a-0ae6108a9697\" (UID: \"f7503d49-6338-43aa-a55a-0ae6108a9697\") "
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.581935    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f7503d49-6338-43aa-a55a-0ae6108a9697-gcp-creds\") pod \"f7503d49-6338-43aa-a55a-0ae6108a9697\" (UID: \"f7503d49-6338-43aa-a55a-0ae6108a9697\") "
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.581976    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccczk\" (UniqueName: \"kubernetes.io/projected/f7503d49-6338-43aa-a55a-0ae6108a9697-kube-api-access-ccczk\") pod \"f7503d49-6338-43aa-a55a-0ae6108a9697\" (UID: \"f7503d49-6338-43aa-a55a-0ae6108a9697\") "
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.582013    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f7503d49-6338-43aa-a55a-0ae6108a9697-data\") pod \"f7503d49-6338-43aa-a55a-0ae6108a9697\" (UID: \"f7503d49-6338-43aa-a55a-0ae6108a9697\") "
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.582149    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7503d49-6338-43aa-a55a-0ae6108a9697-data" (OuterVolumeSpecName: "data") pod "f7503d49-6338-43aa-a55a-0ae6108a9697" (UID: "f7503d49-6338-43aa-a55a-0ae6108a9697"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.582518    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7503d49-6338-43aa-a55a-0ae6108a9697-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "f7503d49-6338-43aa-a55a-0ae6108a9697" (UID: "f7503d49-6338-43aa-a55a-0ae6108a9697"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.582817    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7503d49-6338-43aa-a55a-0ae6108a9697-script" (OuterVolumeSpecName: "script") pod "f7503d49-6338-43aa-a55a-0ae6108a9697" (UID: "f7503d49-6338-43aa-a55a-0ae6108a9697"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.584706    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7503d49-6338-43aa-a55a-0ae6108a9697-kube-api-access-ccczk" (OuterVolumeSpecName: "kube-api-access-ccczk") pod "f7503d49-6338-43aa-a55a-0ae6108a9697" (UID: "f7503d49-6338-43aa-a55a-0ae6108a9697"). InnerVolumeSpecName "kube-api-access-ccczk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.682825    1282 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f7503d49-6338-43aa-a55a-0ae6108a9697-script\") on node \"addons-153780\" DevicePath \"\""
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.682871    1282 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f7503d49-6338-43aa-a55a-0ae6108a9697-gcp-creds\") on node \"addons-153780\" DevicePath \"\""
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.682884    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ccczk\" (UniqueName: \"kubernetes.io/projected/f7503d49-6338-43aa-a55a-0ae6108a9697-kube-api-access-ccczk\") on node \"addons-153780\" DevicePath \"\""
	Nov 24 03:16:53 addons-153780 kubelet[1282]: I1124 03:16:53.682895    1282 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f7503d49-6338-43aa-a55a-0ae6108a9697-data\") on node \"addons-153780\" DevicePath \"\""
	Nov 24 03:16:54 addons-153780 kubelet[1282]: I1124 03:16:54.414684    1282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51cd81c2170cfd7c953e644ccbf5df5e10d9d5400ac41e5221efb237636ca445"
	Nov 24 03:16:54 addons-153780 kubelet[1282]: E1124 03:16:54.416797    1282 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-eefc238c-13a7-4139-bcbc-502e91e6b046\" is forbidden: User \"system:node:addons-153780\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-153780' and this object" podUID="f7503d49-6338-43aa-a55a-0ae6108a9697" pod="local-path-storage/helper-pod-delete-pvc-eefc238c-13a7-4139-bcbc-502e91e6b046"
	Nov 24 03:16:54 addons-153780 kubelet[1282]: I1124 03:16:54.456090    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e35b203-63ec-4a61-862e-f5027e1bf54d" path="/var/lib/kubelet/pods/3e35b203-63ec-4a61-862e-f5027e1bf54d/volumes"
	Nov 24 03:16:54 addons-153780 kubelet[1282]: I1124 03:16:54.456464    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7503d49-6338-43aa-a55a-0ae6108a9697" path="/var/lib/kubelet/pods/f7503d49-6338-43aa-a55a-0ae6108a9697/volumes"
	
	
	==> storage-provisioner [83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d] <==
	W1124 03:16:34.861236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:36.864398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:36.872082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:38.875512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:38.879960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:40.883054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:40.887719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:42.894154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:42.900037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:44.903870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:44.913364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:46.916346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:46.923071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:48.926858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:48.932188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:50.935163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:50.940327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:52.943909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:52.948312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:54.953214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:54.960527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:56.966552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:56.977639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:58.983038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:16:58.990133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
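The steady pairs of "v1 Endpoints is deprecated" warnings, one roughly every two seconds, match a leader-election renew loop, which suggests (an assumption, since the provisioner's source is not shown here) that storage-provisioner still takes an Endpoints-based resource lock. The client-go pattern that avoids the warning is a Lease lock; a sketch with illustrative namespace and lock names:

package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// newLeaseLock builds a Lease-based leader-election lock. Lease objects
// live in coordination.k8s.io and do not trip the v1 Endpoints
// deprecation warning seen above.
func newLeaseLock(cs kubernetes.Interface, id string) (resourcelock.Interface, error) {
	return resourcelock.New(
		resourcelock.LeasesResourceLock, // "leases" instead of "endpoints"
		"kube-system",                   // namespace (illustrative)
		"storage-provisioner",           // lock name (illustrative)
		cs.CoreV1(),
		cs.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id},
	)
}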
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-153780 -n addons-153780
helpers_test.go:269: (dbg) Run:  kubectl --context addons-153780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-bjhlt ingress-nginx-admission-patch-gn8kb registry-creds-764b6fb674-bk79n
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-153780 describe pod ingress-nginx-admission-create-bjhlt ingress-nginx-admission-patch-gn8kb registry-creds-764b6fb674-bk79n
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-153780 describe pod ingress-nginx-admission-create-bjhlt ingress-nginx-admission-patch-gn8kb registry-creds-764b6fb674-bk79n: exit status 1 (170.338951ms)
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bjhlt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gn8kb" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-bk79n" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-153780 describe pod ingress-nginx-admission-create-bjhlt ingress-nginx-admission-patch-gn8kb registry-creds-764b6fb674-bk79n: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable headlamp --alsologtostderr -v=1: exit status 11 (764.917183ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1124 03:17:00.842983  300005 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:17:00.853898  300005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:00.853978  300005 out.go:374] Setting ErrFile to fd 2...
	I1124 03:17:00.854003  300005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:17:00.854376  300005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:17:00.854818  300005 mustload.go:66] Loading cluster: addons-153780
	I1124 03:17:00.855375  300005 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:17:00.855472  300005 addons.go:622] checking whether the cluster is paused
	I1124 03:17:00.855671  300005 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:17:00.855722  300005 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:17:00.856412  300005 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:17:00.910944  300005 ssh_runner.go:195] Run: systemctl --version
	I1124 03:17:00.911018  300005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:17:00.962868  300005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:17:01.111811  300005 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:17:01.111929  300005 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:17:01.291271  300005 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:17:01.291295  300005 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:17:01.291300  300005 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:17:01.291310  300005 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:17:01.291313  300005 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:17:01.291317  300005 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:17:01.291324  300005 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:17:01.291328  300005 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:17:01.291331  300005 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:17:01.291338  300005 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:17:01.291342  300005 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:17:01.291345  300005 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:17:01.291349  300005 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:17:01.291352  300005 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:17:01.291356  300005 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:17:01.291362  300005 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:17:01.291366  300005 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:17:01.291370  300005 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:17:01.291374  300005 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:17:01.291377  300005 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:17:01.291382  300005 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:17:01.291385  300005 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:17:01.291388  300005 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:17:01.291391  300005 cri.go:89] found id: ""
	I1124 03:17:01.291459  300005 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:17:01.354052  300005 out.go:203] 
	W1124 03:17:01.356952  300005 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:17:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:17:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:17:01.356980  300005 out.go:285] * 
	* 
	W1124 03:17:01.376532  300005 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:17:01.400637  300005 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (4.23s)
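Root cause of the exit status 11 above: every "addons disable" first checks whether the cluster is paused, and the trace shows that check shelling out to `sudo runc list -f json`. On this crio node runc's default state directory /run/runc does not exist (crio is evidently using a different runtime root or a different low-level runtime), so the probe errors out before any container is examined and minikube aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch of a more tolerant probe; the --root flag is a standard runc global option, and treating a missing state directory as "no runc-managed containers" is an assumption, not minikube's current behavior:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"os/exec"
)

// listRuncContainers runs `runc list -f json` against the given state
// directory, but short-circuits when the directory is absent instead of
// letting runc fail with "open /run/runc: no such file or directory".
func listRuncContainers(root string) ([]byte, error) {
	if _, err := os.Stat(root); errors.Is(err, fs.ErrNotExist) {
		return []byte("[]"), nil // no state dir => nothing paused (assumption)
	}
	return exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
}

func main() {
	out, err := listRuncContainers("/run/runc")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s\n", out)
}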
TestAddons/parallel/CloudSpanner (5.28s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-gp9qf" [32ddb6c9-ce0d-4d61-834f-d55282d96e7c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004382945s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (272.350718ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1124 03:16:57.010717  299481 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:16:57.012076  299481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:57.012095  299481 out.go:374] Setting ErrFile to fd 2...
	I1124 03:16:57.012102  299481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:57.012420  299481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:16:57.012744  299481 mustload.go:66] Loading cluster: addons-153780
	I1124 03:16:57.013136  299481 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:57.013156  299481 addons.go:622] checking whether the cluster is paused
	I1124 03:16:57.013265  299481 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:57.013286  299481 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:16:57.013881  299481 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:16:57.035985  299481 ssh_runner.go:195] Run: systemctl --version
	I1124 03:16:57.036059  299481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:16:57.053186  299481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:16:57.159220  299481 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:16:57.159318  299481 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:16:57.190948  299481 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:16:57.190977  299481 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:16:57.190983  299481 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:16:57.190986  299481 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:16:57.190990  299481 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:16:57.190993  299481 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:16:57.190996  299481 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:16:57.190999  299481 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:16:57.191002  299481 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:16:57.191009  299481 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:16:57.191013  299481 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:16:57.191017  299481 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:16:57.191020  299481 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:16:57.191023  299481 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:16:57.191027  299481 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:16:57.191033  299481 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:16:57.191036  299481 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:16:57.191039  299481 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:16:57.191042  299481 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:16:57.191045  299481 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:16:57.191050  299481 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:16:57.191053  299481 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:16:57.191056  299481 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:16:57.191059  299481 cri.go:89] found id: ""
	I1124 03:16:57.191110  299481 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:16:57.207099  299481 out.go:203] 
	W1124 03:16:57.209960  299481 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:16:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:16:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:16:57.209997  299481 out.go:285] * 
	* 
	W1124 03:16:57.215742  299481 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:16:57.218698  299481 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.28s)
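Same failure mode as TestAddons/parallel/Headlamp above: the addon itself was healthy (cloud-spanner-emulator reached Running within ~5s), and the test only fails because the "addons disable" paused-state probe dies on `sudo runc list -f json` with "open /run/runc: no such file or directory" — see the probe sketch after the Headlamp section.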
TestAddons/parallel/LocalPath (9.75s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-153780 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-153780 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153780 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [a3099002-ab96-4815-b9e0-7a801f5891bf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [a3099002-ab96-4815-b9e0-7a801f5891bf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [a3099002-ab96-4815-b9e0-7a801f5891bf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003782341s
addons_test.go:967: (dbg) Run:  kubectl --context addons-153780 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 ssh "cat /opt/local-path-provisioner/pvc-eefc238c-13a7-4139-bcbc-502e91e6b046_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-153780 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-153780 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (396.344531ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1124 03:16:51.666950  299292 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:16:51.667889  299292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:51.667939  299292 out.go:374] Setting ErrFile to fd 2...
	I1124 03:16:51.667961  299292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:51.668310  299292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:16:51.668684  299292 mustload.go:66] Loading cluster: addons-153780
	I1124 03:16:51.669135  299292 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:51.669175  299292 addons.go:622] checking whether the cluster is paused
	I1124 03:16:51.669326  299292 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:51.669358  299292 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:16:51.669924  299292 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:16:51.698118  299292 ssh_runner.go:195] Run: systemctl --version
	I1124 03:16:51.698180  299292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:16:51.716381  299292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:16:51.824975  299292 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:16:51.825064  299292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:16:51.904032  299292 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:16:51.904055  299292 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:16:51.904060  299292 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:16:51.904063  299292 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:16:51.904071  299292 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:16:51.904075  299292 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:16:51.904078  299292 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:16:51.904082  299292 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:16:51.904085  299292 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:16:51.904091  299292 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:16:51.904094  299292 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:16:51.904098  299292 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:16:51.904102  299292 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:16:51.904105  299292 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:16:51.904108  299292 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:16:51.904116  299292 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:16:51.904120  299292 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:16:51.904125  299292 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:16:51.904129  299292 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:16:51.904132  299292 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:16:51.904137  299292 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:16:51.904143  299292 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:16:51.904146  299292 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:16:51.904149  299292 cri.go:89] found id: ""
	I1124 03:16:51.904199  299292 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:16:51.923749  299292 out.go:203] 
	W1124 03:16:51.926801  299292 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:16:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:16:51.926834  299292 out.go:285] * 
	W1124 03:16:51.932359  299292 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:16:51.936074  299292 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.75s)
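Note that every addon-disable failure in this report exits the same way: the paused-state check lists kube-system containers via crictl successfully, then `sudo runc list -f json` fails because /run/runc does not exist on the node. That points at the OCI runtime actually backing CRI-O here (crun, for instance, keeps its state under /run/crun rather than /run/runc), not at the individual addons. A hypothetical triage sequence against this profile; only the last command is taken verbatim from the log:

    # see which OCI runtime CRI-O is configured to use on the node
    minikube -p addons-153780 ssh -- sudo crictl info | grep -i runtime
    # check which runtime state directories actually exist
    minikube -p addons-153780 ssh -- ls -d /run/runc /run/crun
    # the check that fails in the output above
    minikube -p addons-153780 ssh -- sudo runc list -f json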

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.36s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-j7cvq" [3405d820-d287-4751-a138-a2c64aaf6375] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004186397s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (359.324931ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:16:41.911411  298855 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:16:41.913298  298855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:41.913317  298855 out.go:374] Setting ErrFile to fd 2...
	I1124 03:16:41.913324  298855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:41.913636  298855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:16:41.913944  298855 mustload.go:66] Loading cluster: addons-153780
	I1124 03:16:41.914353  298855 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:41.914364  298855 addons.go:622] checking whether the cluster is paused
	I1124 03:16:41.914487  298855 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:41.914498  298855 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:16:41.915000  298855 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:16:41.945527  298855 ssh_runner.go:195] Run: systemctl --version
	I1124 03:16:41.945594  298855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:16:41.968798  298855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:16:42.083113  298855 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:16:42.083220  298855 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:16:42.128860  298855 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:16:42.128889  298855 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:16:42.128900  298855 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:16:42.128904  298855 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:16:42.128908  298855 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:16:42.128917  298855 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:16:42.128923  298855 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:16:42.128926  298855 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:16:42.128930  298855 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:16:42.128937  298855 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:16:42.128941  298855 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:16:42.128946  298855 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:16:42.128957  298855 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:16:42.128960  298855 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:16:42.128963  298855 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:16:42.128969  298855 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:16:42.128978  298855 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:16:42.128982  298855 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:16:42.128985  298855 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:16:42.128988  298855 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:16:42.128993  298855 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:16:42.128996  298855 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:16:42.128999  298855 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:16:42.129002  298855 cri.go:89] found id: ""
	I1124 03:16:42.129065  298855 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:16:42.167752  298855 out.go:203] 
	W1124 03:16:42.172609  298855 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:16:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:16:42.172643  298855 out.go:285] * 
	W1124 03:16:42.180861  298855 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:16:42.185368  298855 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.36s)

                                                
                                    
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-t6r26" [25e64816-f833-4d0d-a40b-5255562ee53f] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005521025s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-153780 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-153780 addons disable yakd --alsologtostderr -v=1: exit status 11 (251.968246ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:16:25.668645  298439 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:16:25.669533  298439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:25.669585  298439 out.go:374] Setting ErrFile to fd 2...
	I1124 03:16:25.669608  298439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:16:25.670030  298439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:16:25.670441  298439 mustload.go:66] Loading cluster: addons-153780
	I1124 03:16:25.671485  298439 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:25.671517  298439 addons.go:622] checking whether the cluster is paused
	I1124 03:16:25.671693  298439 config.go:182] Loaded profile config "addons-153780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:16:25.671713  298439 host.go:66] Checking if "addons-153780" exists ...
	I1124 03:16:25.672342  298439 cli_runner.go:164] Run: docker container inspect addons-153780 --format={{.State.Status}}
	I1124 03:16:25.690084  298439 ssh_runner.go:195] Run: systemctl --version
	I1124 03:16:25.690144  298439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-153780
	I1124 03:16:25.709002  298439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/addons-153780/id_rsa Username:docker}
	I1124 03:16:25.813164  298439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:16:25.813259  298439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:16:25.844436  298439 cri.go:89] found id: "3485678af5d19bae1d4411bec346a97566826570b9acca73e66fcd515e5dbfdb"
	I1124 03:16:25.844460  298439 cri.go:89] found id: "7548077813b01fb75c6d1c88996c76b7faee2fe33dde991c58f565ccb1427ad5"
	I1124 03:16:25.844465  298439 cri.go:89] found id: "8afdc0c7272e751be59501860afc29e8a7a532529fda38a20c97838336ba67b7"
	I1124 03:16:25.844469  298439 cri.go:89] found id: "29d02ce914d44a4b8329512d1a23e85804b768d884d99b075e34cf6e14dac788"
	I1124 03:16:25.844472  298439 cri.go:89] found id: "9af2707445bb441233c326d9e3bf3966ba13a26a89b7254b24679f387b0b5bc0"
	I1124 03:16:25.844476  298439 cri.go:89] found id: "f1836e8795e70fdafa512b9e9d8802dc1dd32e3cbeed9fc49f65e08646f91bfc"
	I1124 03:16:25.844479  298439 cri.go:89] found id: "bcef0ab7ff5ee5c5f49176f121fab57276a8e6c2d6f1ad04f8043b241c47d281"
	I1124 03:16:25.844482  298439 cri.go:89] found id: "87ca81f329f9944d4aa8992fd39e8df84ad075a50b258501c5b36393d9b8490f"
	I1124 03:16:25.844509  298439 cri.go:89] found id: "e055a401fe670aa2c020065d05570646e820163cf83b991f8ce8c75cac7a531b"
	I1124 03:16:25.844526  298439 cri.go:89] found id: "40df276efacb02cb1f87ef6c0cd3435fa8719cfd11c9a1fcd645347d1bbf431d"
	I1124 03:16:25.844538  298439 cri.go:89] found id: "99c706b4665c8c75b900e2aafef4cb98d48a526abe0c79cf85a0c92d29c9bc8e"
	I1124 03:16:25.844542  298439 cri.go:89] found id: "e9e5ba99ab47bcb26e5d61dc6a04f4f4259bdc62676c462b2e122037b199d6dd"
	I1124 03:16:25.844545  298439 cri.go:89] found id: "4bf7144c7e3cbfd4ce77529d316b328c7ebd75692d7023e192f903097f99167f"
	I1124 03:16:25.844548  298439 cri.go:89] found id: "e0d73582da9fb4ef7041d1e171499fdca30fab4e8fa2887ad289666b528e8c73"
	I1124 03:16:25.844551  298439 cri.go:89] found id: "d731175eced009ea8d73a26124f854679fe6a04beaf289f7a3a7635ccfa7155a"
	I1124 03:16:25.844557  298439 cri.go:89] found id: "83cadb364a123d7398e5dd2800753acc7d78782c7f5f032824ec9d6dff61065d"
	I1124 03:16:25.844563  298439 cri.go:89] found id: "122b5b3da819c62b28e08a0e8f492efb939d921f5cc87f9f5dd30d4354f4c824"
	I1124 03:16:25.844568  298439 cri.go:89] found id: "8549937bedbf1ccf4f98e083e6dcbaa5697df18822718fd0d27b79e6e76f8942"
	I1124 03:16:25.844571  298439 cri.go:89] found id: "243940847a312c0327ab19daf598258d6deca45c2a27efb8e89e875e0d87c684"
	I1124 03:16:25.844589  298439 cri.go:89] found id: "78b45913b9ad783248086bdaf8ec66bb8abed626933418187bc0d43337f720ce"
	I1124 03:16:25.844596  298439 cri.go:89] found id: "338b16a84542c2f340889c798911ea671b465c1749548d747c2fff6223b15a89"
	I1124 03:16:25.844602  298439 cri.go:89] found id: "9d6da0d20171f4ca6805881f1293dd5146dfe8c43773853626d869062a6287b6"
	I1124 03:16:25.844605  298439 cri.go:89] found id: "44d60dc8fd9beb0e354e25ab604f6137163390956f8c04bf67e790db61625fb4"
	I1124 03:16:25.844608  298439 cri.go:89] found id: ""
	I1124 03:16:25.844669  298439 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:16:25.859825  298439 out.go:203] 
	W1124 03:16:25.862713  298439 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:16:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:16:25.862738  298439 out.go:285] * 
	W1124 03:16:25.868336  298439 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:16:25.871282  298439 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-153780 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-666975 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-666975 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-q2t7f" [2bb3d8fc-cd52-41c7-b84b-7a9839e16a3f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1124 03:26:09.841839  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:26:37.552918  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:31:09.841855  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-666975 -n functional-666975
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-24 03:34:01.301470438 +0000 UTC m=+1295.598408218
functional_test.go:1645: (dbg) Run:  kubectl --context functional-666975 describe po hello-node-connect-7d85dfc575-q2t7f -n default
functional_test.go:1645: (dbg) kubectl --context functional-666975 describe po hello-node-connect-7d85dfc575-q2t7f -n default:
Name:             hello-node-connect-7d85dfc575-q2t7f
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-666975/192.168.49.2
Start Time:       Mon, 24 Nov 2025 03:24:00 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-grwmv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-grwmv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-q2t7f to functional-666975
Normal   Pulling    7m11s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m11s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m11s (x5 over 10m)     kubelet            Error: ErrImagePull
Normal   BackOff    4m50s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m50s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-666975 logs hello-node-connect-7d85dfc575-q2t7f -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-666975 logs hello-node-connect-7d85dfc575-q2t7f -n default: exit status 1 (97.244926ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-q2t7f" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-666975 logs hello-node-connect-7d85dfc575-q2t7f -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
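The events above show the actual root cause: the deployment was created with the short image name kicbase/echo-server, and the node's short-name mode is enforcing, so the unqualified kicbase/echo-server:latest resolves ambiguously and every pull is rejected. A hypothetical fix, assuming Docker Hub is the intended registry for this image:

    # reproduce the rejected pull directly on the node
    minikube -p functional-666975 ssh -- sudo crictl pull kicbase/echo-server:latest
    # point the existing deployment at a fully qualified reference instead
    kubectl --context functional-666975 set image deployment/hello-node-connect \
      echo-server=docker.io/kicbase/echo-server:latest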
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-666975 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-q2t7f
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-666975/192.168.49.2
Start Time:       Mon, 24 Nov 2025 03:24:00 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-grwmv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-grwmv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-q2t7f to functional-666975
Normal   Pulling    7m11s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m11s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m11s (x5 over 10m)     kubelet            Error: ErrImagePull
Normal   BackOff    4m50s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m50s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-666975 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-666975 logs -l app=hello-node-connect: exit status 1 (95.267881ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-q2t7f" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-666975 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-666975 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.100.64
IPs:                      10.106.100.64
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32468/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
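The empty Endpoints field above is consistent with the pod never becoming Ready: a NodePort service only forwards to ready backends, so until the image pull succeeds there is nothing to route to. A quick confirmation (hypothetical check, same context as above):

    kubectl --context functional-666975 get endpoints hello-node-connect -n default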
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-666975
helpers_test.go:243: (dbg) docker inspect functional-666975:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e7087edd018071d735ca2000a0809b8172c249f2081d7d570329c2ea300766b",
	        "Created": "2025-11-24T03:20:54.06349795Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307324,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:20:54.128803021Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/5e7087edd018071d735ca2000a0809b8172c249f2081d7d570329c2ea300766b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e7087edd018071d735ca2000a0809b8172c249f2081d7d570329c2ea300766b/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e7087edd018071d735ca2000a0809b8172c249f2081d7d570329c2ea300766b/hosts",
	        "LogPath": "/var/lib/docker/containers/5e7087edd018071d735ca2000a0809b8172c249f2081d7d570329c2ea300766b/5e7087edd018071d735ca2000a0809b8172c249f2081d7d570329c2ea300766b-json.log",
	        "Name": "/functional-666975",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-666975:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-666975",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e7087edd018071d735ca2000a0809b8172c249f2081d7d570329c2ea300766b",
	                "LowerDir": "/var/lib/docker/overlay2/439ed29d45a8a209ff07b167684cfcc97fc5cce1390a0d12b3f106a6eea01408-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/439ed29d45a8a209ff07b167684cfcc97fc5cce1390a0d12b3f106a6eea01408/merged",
	                "UpperDir": "/var/lib/docker/overlay2/439ed29d45a8a209ff07b167684cfcc97fc5cce1390a0d12b3f106a6eea01408/diff",
	                "WorkDir": "/var/lib/docker/overlay2/439ed29d45a8a209ff07b167684cfcc97fc5cce1390a0d12b3f106a6eea01408/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-666975",
	                "Source": "/var/lib/docker/volumes/functional-666975/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-666975",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-666975",
	                "name.minikube.sigs.k8s.io": "functional-666975",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "496dccebdae5e5cfb350eab5fbfcafbefa9b3d6a16de25aa0af22c5244d26d68",
	            "SandboxKey": "/var/run/docker/netns/496dccebdae5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-666975": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:59:93:8f:b1:b9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2b03c4fa4123ca4a041a5beae3798d8868bc5838cab62fdddc6afd8ef5806182",
	                    "EndpointID": "376f420fc0cf2076533ad6056ed66fab13f8db46025813e76fca9b4dd8291c76",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-666975",
	                        "5e7087edd018"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
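The Ports section of the inspect output above is where minikube learns its SSH mapping: host port 33149 on 127.0.0.1 is bound to the container's 22/tcp. The Go template used earlier in this log extracts it in one step, and the same command works standalone:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-666975
    # prints 33149 for this run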
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-666975 -n functional-666975
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-666975 logs -n 25: (1.537195509s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-666975 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ ssh            │ functional-666975 ssh -- ls -la /mount-9p                                                                          │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ ssh            │ functional-666975 ssh sudo umount -f /mount-9p                                                                     │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │                     │
	│ mount          │ -p functional-666975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1827520769/001:/mount1 --alsologtostderr -v=1 │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │                     │
	│ ssh            │ functional-666975 ssh findmnt -T /mount1                                                                           │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │                     │
	│ mount          │ -p functional-666975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1827520769/001:/mount2 --alsologtostderr -v=1 │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │                     │
	│ mount          │ -p functional-666975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1827520769/001:/mount3 --alsologtostderr -v=1 │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │                     │
	│ ssh            │ functional-666975 ssh findmnt -T /mount1                                                                           │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ ssh            │ functional-666975 ssh findmnt -T /mount2                                                                           │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ ssh            │ functional-666975 ssh findmnt -T /mount3                                                                           │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ mount          │ -p functional-666975 --kill=true                                                                                   │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │                     │
	│ start          │ -p functional-666975 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │                     │
	│ start          │ -p functional-666975 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │                     │
	│ start          │ -p functional-666975 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-666975 --alsologtostderr -v=1                                                     │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ update-context │ functional-666975 update-context --alsologtostderr -v=2                                                            │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ update-context │ functional-666975 update-context --alsologtostderr -v=2                                                            │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ update-context │ functional-666975 update-context --alsologtostderr -v=2                                                            │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ image          │ functional-666975 image ls --format short --alsologtostderr                                                        │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ image          │ functional-666975 image ls --format yaml --alsologtostderr                                                         │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ ssh            │ functional-666975 ssh pgrep buildkitd                                                                              │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │                     │
	│ image          │ functional-666975 image build -t localhost/my-image:functional-666975 testdata/build --alsologtostderr             │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ image          │ functional-666975 image ls                                                                                         │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ image          │ functional-666975 image ls --format json --alsologtostderr                                                         │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	│ image          │ functional-666975 image ls --format table --alsologtostderr                                                        │ functional-666975 │ jenkins │ v1.37.0 │ 24 Nov 25 03:33 UTC │ 24 Nov 25 03:33 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:33:43
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:33:43.172665  319065 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:33:43.172808  319065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:33:43.172820  319065 out.go:374] Setting ErrFile to fd 2...
	I1124 03:33:43.172825  319065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:33:43.173205  319065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:33:43.173647  319065 out.go:368] Setting JSON to false
	I1124 03:33:43.174584  319065 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8153,"bootTime":1763947071,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 03:33:43.174664  319065 start.go:143] virtualization:  
	I1124 03:33:43.177734  319065 out.go:179] * [functional-666975] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:33:43.181388  319065 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:33:43.181501  319065 notify.go:221] Checking for updates...
	I1124 03:33:43.187224  319065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:33:43.190103  319065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 03:33:43.192852  319065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 03:33:43.195667  319065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:33:43.198567  319065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:33:43.201868  319065 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:33:43.202436  319065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:33:43.234532  319065 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:33:43.234640  319065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:33:43.296589  319065 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 03:33:43.287291778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:33:43.296704  319065 docker.go:319] overlay module found
	I1124 03:33:43.299923  319065 out.go:179] * Using the docker driver based on the existing profile
	I1124 03:33:43.302838  319065 start.go:309] selected driver: docker
	I1124 03:33:43.302859  319065 start.go:927] validating driver "docker" against &{Name:functional-666975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-666975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:33:43.302958  319065 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:33:43.306576  319065 out.go:203] 
	W1124 03:33:43.309375  319065 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I1124 03:33:43.312152  319065 out.go:203] 
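	The start above exits with RSRC_INSUFFICIENT_REQ_MEMORY: a start against the existing functional-666975 profile requested 250MiB of memory, below minikube's usable minimum of 1800MB. A minimal reproduction sketch follows; the exact harness invocation is not shown in this excerpt, so the flags here are assumptions, but any --memory value under the minimum is rejected during validation, before the node is touched:
	
	  # Assumed invocation: a sub-minimum --memory fails fast with the same
	  # "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY" message and a non-zero exit.
	  out/minikube-linux-arm64 start -p functional-666975 --driver=docker --memory=250mb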
	
	
	==> CRI-O <==
	Nov 24 03:33:48 functional-666975 crio[3527]: time="2025-11-24T03:33:48.998968878Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf" id=8e182623-9acc-4f7b-8705-d355d6059e24 name=/runtime.v1.ImageService/PullImage
	Nov 24 03:33:49 functional-666975 crio[3527]: time="2025-11-24T03:33:49.000690313Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=286e45dd-9077-42c2-aff5-2a1c51d50370 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:33:49 functional-666975 crio[3527]: time="2025-11-24T03:33:49.003145219Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=216c0e85-5029-40c7-b224-c53b65dd0154 name=/runtime.v1.ImageService/PullImage
	Nov 24 03:33:49 functional-666975 crio[3527]: time="2025-11-24T03:33:49.003796088Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=d7631439-c8a8-484c-8032-e57ce375ec0a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:33:49 functional-666975 crio[3527]: time="2025-11-24T03:33:49.004438801Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Nov 24 03:33:49 functional-666975 crio[3527]: time="2025-11-24T03:33:49.011444115Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8xnss/kubernetes-dashboard" id=067396fa-39eb-417d-92e2-8a1e5ec5507a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:33:49 functional-666975 crio[3527]: time="2025-11-24T03:33:49.011588715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:33:49 functional-666975 crio[3527]: time="2025-11-24T03:33:49.017251337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:33:49 functional-666975 crio[3527]: time="2025-11-24T03:33:49.017805279Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/94d08aa322e3ce8e7cdf9b2c2346f82b960dd8014d31873259a06fc34e735154/merged/etc/group: no such file or directory"
	Nov 24 03:33:49 functional-666975 crio[3527]: time="2025-11-24T03:33:49.01831035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:33:49 functional-666975 crio[3527]: time="2025-11-24T03:33:49.034756313Z" level=info msg="Created container d2350540d69c87ed0ba32abb1498fe38b20796b5c674d26e07990bb42db127f6: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8xnss/kubernetes-dashboard" id=067396fa-39eb-417d-92e2-8a1e5ec5507a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:33:49 functional-666975 crio[3527]: time="2025-11-24T03:33:49.036975696Z" level=info msg="Starting container: d2350540d69c87ed0ba32abb1498fe38b20796b5c674d26e07990bb42db127f6" id=2d9d686d-d920-4352-9abc-f5acab32ef81 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:33:49 functional-666975 crio[3527]: time="2025-11-24T03:33:49.039438922Z" level=info msg="Started container" PID=6830 containerID=d2350540d69c87ed0ba32abb1498fe38b20796b5c674d26e07990bb42db127f6 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8xnss/kubernetes-dashboard id=2d9d686d-d920-4352-9abc-f5acab32ef81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=52aebe58a456e04dec04bde1826f9118e89fe9d01a9237517309ab6e942e4a68
	Nov 24 03:33:49 functional-666975 crio[3527]: time="2025-11-24T03:33:49.251675276Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\"+\"\", expecting one of \"linux+arm64+\\\"v8\\\", linux+arm64+\\\"\\\"\""
	Nov 24 03:33:50 functional-666975 crio[3527]: time="2025-11-24T03:33:50.185761852Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a" id=216c0e85-5029-40c7-b224-c53b65dd0154 name=/runtime.v1.ImageService/PullImage
	Nov 24 03:33:50 functional-666975 crio[3527]: time="2025-11-24T03:33:50.186581888Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=f15d2dd6-24ca-425d-bada-132c6922628a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:33:50 functional-666975 crio[3527]: time="2025-11-24T03:33:50.190698134Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=1cdd40bd-9a14-4bf9-8f95-1cf1b4e9f4d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:33:50 functional-666975 crio[3527]: time="2025-11-24T03:33:50.201381552Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-8sddn/dashboard-metrics-scraper" id=98f32770-ee55-4a6d-9eb5-2e1fcdf8706c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:33:50 functional-666975 crio[3527]: time="2025-11-24T03:33:50.201517274Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:33:50 functional-666975 crio[3527]: time="2025-11-24T03:33:50.207707212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:33:50 functional-666975 crio[3527]: time="2025-11-24T03:33:50.208140996Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ebe8d4c48fe2021cae09ff74789a609325e8acb2b4c622c6e2a54ee03db32234/merged/etc/group: no such file or directory"
	Nov 24 03:33:50 functional-666975 crio[3527]: time="2025-11-24T03:33:50.20885429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:33:50 functional-666975 crio[3527]: time="2025-11-24T03:33:50.242997813Z" level=info msg="Created container fd071709b8ff0b3f75b08ff9f6f32e3d81c951f6d7464ff3c890a8e16c51af5c: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-8sddn/dashboard-metrics-scraper" id=98f32770-ee55-4a6d-9eb5-2e1fcdf8706c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:33:50 functional-666975 crio[3527]: time="2025-11-24T03:33:50.246370634Z" level=info msg="Starting container: fd071709b8ff0b3f75b08ff9f6f32e3d81c951f6d7464ff3c890a8e16c51af5c" id=ca865290-3f0b-4a09-b164-47e226740c44 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:33:50 functional-666975 crio[3527]: time="2025-11-24T03:33:50.251336611Z" level=info msg="Started container" PID=6873 containerID=fd071709b8ff0b3f75b08ff9f6f32e3d81c951f6d7464ff3c890a8e16c51af5c description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-8sddn/dashboard-metrics-scraper id=ca865290-3f0b-4a09-b164-47e226740c44 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7710c4ffdbbe63b214841eb8e9050a872b1d3126a3bd508cd2c80a947a710490
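	The CRI-O entries above cover the kubernetes-dashboard rollout: both images are pulled by digest, and the "Image operating system mismatch" line is informational (a linux/amd64 manifest entry was skipped in favor of the arm64 variant). A sketch for reading the same entries straight from the node, assuming CRI-O runs as a systemd unit inside the kicbase container, as it does for minikube's crio runtime:
	
	  # Tail the CRI-O journal inside the minikube node.
	  out/minikube-linux-arm64 ssh -p functional-666975 -- sudo journalctl -u crio --no-pager -n 50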
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	fd071709b8ff0       docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a   12 seconds ago      Running             dashboard-metrics-scraper   0                   7710c4ffdbbe6       dashboard-metrics-scraper-77bf4d6c4c-8sddn   kubernetes-dashboard
	d2350540d69c8       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf         13 seconds ago      Running             kubernetes-dashboard        0                   52aebe58a456e       kubernetes-dashboard-855c9754f9-8xnss        kubernetes-dashboard
	bb157daff1add       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e              26 seconds ago      Exited              mount-munger                0                   9392b97b256db       busybox-mount                                default
	cb57dba28cce8       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712                  10 minutes ago      Running             myfrontend                  0                   40938ae878b47       sp-pod                                       default
	753ab11f2cb3b       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                  10 minutes ago      Running             nginx                       0                   3e0d0d6a4a418       nginx-svc                                    default
	588bef5b1b619       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Running             storage-provisioner         3                   4a469f9c1abec       storage-provisioner                          kube-system
	11467819ae00f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Running             coredns                     2                   e653939f7d4f1       coredns-66bc5c9577-hp7hg                     kube-system
	6c38cd6777577       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Running             kindnet-cni                 2                   9e61830a72691       kindnet-64jnl                                kube-system
	c53873e426c82       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Running             kube-proxy                  2                   7def203e8aacc       kube-proxy-kvff9                             kube-system
	7cfd82b00a000       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                 11 minutes ago      Running             kube-apiserver              0                   e44f8d75174da       kube-apiserver-functional-666975             kube-system
	d91469046e5e8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Running             kube-scheduler              2                   a4d569712cdaf       kube-scheduler-functional-666975             kube-system
	1d0da6d798876       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Running             kube-controller-manager     2                   fa6b096c38dc4       kube-controller-manager-functional-666975    kube-system
	45d984e911729       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Running             etcd                        2                   6668f7c4f4793       etcd-functional-666975                       kube-system
	62c47d305962f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Exited              storage-provisioner         2                   4a469f9c1abec       storage-provisioner                          kube-system
	a58e31b2e8a42       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Exited              etcd                        1                   6668f7c4f4793       etcd-functional-666975                       kube-system
	62109f5a63f94       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Exited              kube-controller-manager     1                   fa6b096c38dc4       kube-controller-manager-functional-666975    kube-system
	7aaf8f2ce1555       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Exited              coredns                     1                   e653939f7d4f1       coredns-66bc5c9577-hp7hg                     kube-system
	f8780ec394b31       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Exited              kube-proxy                  1                   7def203e8aacc       kube-proxy-kvff9                             kube-system
	fd2f9cdbbfc6e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Exited              kindnet-cni                 1                   9e61830a72691       kindnet-64jnl                                kube-system
	bc163b223ae5d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Exited              kube-scheduler              1                   a4d569712cdaf       kube-scheduler-functional-666975             kube-system
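	The status table shows a consistent picture at collection time: the two dashboard containers are freshly Running, busybox-mount has Exited as intended, and each control-plane component has Exited attempts from the earlier restarts plus a Running successor. A sketch for regenerating the table, assuming crictl is on the node PATH as in minikube's kicbase image:
	
	  # List all containers, running and exited, via the CRI socket.
	  out/minikube-linux-arm64 ssh -p functional-666975 -- sudo crictl ps -a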
	
	
	==> coredns [11467819ae00f46f841e806c829ea7d47447caa379be2c8590f173dfc4e1ac14] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45588 - 61878 "HINFO IN 3230016456252713464.2492049995371704803. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018510928s
	
	
	==> coredns [7aaf8f2ce1555460de95846567de849ef99fd0749ed1cf87d8dd88df83e1d7dc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38232 - 20064 "HINFO IN 564379602601845478.221500101656224781. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.025899217s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
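	This earlier coredns instance logged "forbidden" list errors for namespaces, services, and endpointslices while the restarted API server's RBAC caches were still syncing, then shut down cleanly on SIGTERM; the replacement instance above starts without them. A sketch for confirming the grants are in place, assuming the kubectl context is named after the profile as minikube sets it up:
	
	  # Should print "yes"; a persistent "no" would point at a real RBAC problem
	  # rather than the startup race seen here.
	  kubectl --context functional-666975 auth can-i list namespaces \
	    --as=system:serviceaccount:kube-system:coredns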
	
	
	==> describe nodes <==
	Name:               functional-666975
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-666975
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=functional-666975
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_21_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:21:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-666975
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:33:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:33:59 +0000   Mon, 24 Nov 2025 03:21:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:33:59 +0000   Mon, 24 Nov 2025 03:21:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:33:59 +0000   Mon, 24 Nov 2025 03:21:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:33:59 +0000   Mon, 24 Nov 2025 03:22:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-666975
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                a461a047-3fa4-4c95-bd84-48f2a9b9eb7b
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hhtsd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-q2t7f           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-hp7hg                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-666975                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-64jnl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-666975              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-666975     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kvff9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-666975              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-8sddn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8xnss         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-666975 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-666975 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-666975 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-666975 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-666975 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-666975 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-666975 event: Registered Node functional-666975 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-666975 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-666975 event: Registered Node functional-666975 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-666975 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-666975 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-666975 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node functional-666975 event: Registered Node functional-666975 in Controller
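	The node description confirms the cluster is healthy when the report is collected: Ready since 03:22:04, no pressure conditions, and the dashboard pods scheduled 19s earlier. The repeated Starting/NodeHasSufficient* events reflect the kubelet restarting during the functional tests rather than a fault. A sketch for reproducing this view, again assuming the profile-named context:
	
	  kubectl --context functional-666975 describe node functional-666975
	  # Or just the conditions:
	  kubectl --context functional-666975 get node functional-666975 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'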
	
	
	==> dmesg <==
	[Nov24 01:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014604] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.520213] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036736] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.794505] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.307568] kauditd_printk_skb: 36 callbacks suppressed
	[Nov24 03:08] hrtimer: interrupt took 4583507 ns
	[Nov24 03:11] kauditd_printk_skb: 8 callbacks suppressed
	[Nov24 03:14] overlayfs: idmapped layers are currently not supported
	[  +0.056945] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov24 03:20] overlayfs: idmapped layers are currently not supported
	[Nov24 03:21] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [45d984e911729a0ab059f25ea4a962df55582d378443295beb752a910f8f3bb7] <==
	{"level":"warn","ts":"2025-11-24T03:22:55.143034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.162631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.202958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.223331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.240781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.273004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.331432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.359373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.390508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.414110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.443241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.484826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.503319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.546543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.566623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.589192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.618419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.635047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.678636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.713988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.758007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:55.901969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45632","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T03:32:53.796469Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1135}
	{"level":"info","ts":"2025-11-24T03:32:53.821115Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1135,"took":"24.354701ms","hash":2251796992,"current-db-size-bytes":3342336,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1449984,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-11-24T03:32:53.821178Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2251796992,"revision":1135,"compact-revision":-1}
	
	
	==> etcd [a58e31b2e8a42960fd3eecd8349ce054b66fdc0ab740dd9296b238a76b143baf] <==
	{"level":"warn","ts":"2025-11-24T03:22:21.347729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:21.387235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:21.415899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:21.440263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:21.482643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:21.520654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:22:21.614923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45332","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T03:22:42.356067Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T03:22:42.356137Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-666975","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-24T03:22:42.356237Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T03:22:42.356303Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T03:22:42.491896Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T03:22:42.491980Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-24T03:22:42.492051Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T03:22:42.492071Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-24T03:22:42.492113Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T03:22:42.492188Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T03:22:42.492230Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T03:22:42.492329Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T03:22:42.492350Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T03:22:42.492358Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T03:22:42.495891Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-24T03:22:42.495980Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T03:22:42.496017Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-24T03:22:42.496027Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-666975","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 03:34:03 up  2:16,  0 user,  load average: 1.16, 0.64, 1.45
	Linux functional-666975 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6c38cd6777577be19d8a714108af2b409c655ec7551a71ab1f7f82d864f5371f] <==
	I1124 03:31:58.537705       1 main.go:301] handling current node
	I1124 03:32:08.542527       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:32:08.542558       1 main.go:301] handling current node
	I1124 03:32:18.537275       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:32:18.537312       1 main.go:301] handling current node
	I1124 03:32:28.537696       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:32:28.537748       1 main.go:301] handling current node
	I1124 03:32:38.542583       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:32:38.542620       1 main.go:301] handling current node
	I1124 03:32:48.537922       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:32:48.537958       1 main.go:301] handling current node
	I1124 03:32:58.537404       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:32:58.537561       1 main.go:301] handling current node
	I1124 03:33:08.545354       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:33:08.545393       1 main.go:301] handling current node
	I1124 03:33:18.536931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:33:18.536968       1 main.go:301] handling current node
	I1124 03:33:28.537747       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:33:28.537789       1 main.go:301] handling current node
	I1124 03:33:38.537523       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:33:38.537566       1 main.go:301] handling current node
	I1124 03:33:48.538590       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:33:48.538621       1 main.go:301] handling current node
	I1124 03:33:58.536911       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:33:58.537042       1 main.go:301] handling current node
	
	
	==> kindnet [fd2f9cdbbfc6e124a14015713686273c5fb6a7f6643b4a6bd2c0cf2a5c3fbea9] <==
	I1124 03:22:17.128946       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:22:17.129353       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 03:22:17.129493       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:22:17.129505       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:22:17.129518       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:22:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:22:17.407524       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:22:17.418519       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:22:17.418624       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:22:17.419583       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 03:22:22.570937       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 03:22:22.588067       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 03:22:22.588159       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 03:22:22.588200       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1124 03:22:24.119532       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:22:24.119569       1 metrics.go:72] Registering metrics
	I1124 03:22:24.119623       1 controller.go:711] "Syncing nftables rules"
	I1124 03:22:27.408044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:22:27.408098       1 main.go:301] handling current node
	I1124 03:22:37.410567       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 03:22:37.410602       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7cfd82b00a0009534df566ec892c89aa0c4c3c176b2b10f9cb05a18f1b22f34b] <==
	I1124 03:22:57.108760       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 03:22:57.133929       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:22:57.135721       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 03:22:57.135819       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 03:22:57.135951       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1124 03:22:57.176430       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 03:22:57.699797       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:22:57.892687       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:22:59.434217       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:22:59.558311       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:22:59.630957       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:22:59.638861       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:23:00.153076       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:23:00.337446       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:23:00.429978       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:23:15.428463       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.149.80"}
	I1124 03:23:24.388575       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.219.174"}
	I1124 03:23:28.038351       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.63.49"}
	E1124 03:23:52.914876       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55560: use of closed network connection
	E1124 03:24:00.578680       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55592: use of closed network connection
	I1124 03:24:00.939636       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.100.64"}
	I1124 03:32:56.957091       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:33:44.336458       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:33:44.641708       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.127.246"}
	I1124 03:33:44.662814       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.120.213"}
	
	
	==> kube-controller-manager [1d0da6d798876e9939e60973204d0ea5deb1dfd5f08c7546d164a4fab5f5cbc2] <==
	I1124 03:23:00.059126       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-666975"
	I1124 03:23:00.059180       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 03:23:00.062342       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 03:23:00.064441       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:23:00.064489       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 03:23:00.064514       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:23:00.064598       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 03:23:00.066926       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 03:23:00.070201       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:23:00.082209       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 03:23:00.082353       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:23:00.085282       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:23:00.165029       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 03:23:00.245612       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:23:00.245646       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:23:00.245656       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:23:00.265405       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 03:33:44.450095       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 03:33:44.468785       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 03:33:44.469468       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 03:33:44.481568       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 03:33:44.492310       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 03:33:44.492810       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 03:33:44.497291       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 03:33:44.499353       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
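	The repeated "serviceaccount \"kubernetes-dashboard\" not found" errors are a creation-order race: the dashboard ReplicaSets were synced before their ServiceAccount had been applied, and the controller retried until it existed, which the Running dashboard containers in the status table above confirm. A sketch for checking that the namespace settled, assuming the profile-named context:
	
	  kubectl --context functional-666975 -n kubernetes-dashboard \
	    get serviceaccounts,deployments,pods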
	
	
	==> kube-controller-manager [62109f5a63f946871219bcaeb88e56c7c644880621157904621e2348cb0dda9c] <==
	I1124 03:22:25.870594       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:22:25.870603       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 03:22:25.870570       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:22:25.871916       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:22:25.874078       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:22:25.877748       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 03:22:25.881091       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:22:25.886506       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:22:25.889670       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 03:22:25.890870       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 03:22:25.890943       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 03:22:25.892115       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 03:22:25.894394       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 03:22:25.897684       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 03:22:25.897832       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:22:25.916610       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:22:25.916627       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:22:25.916728       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 03:22:25.916798       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:22:25.916709       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 03:22:25.916797       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 03:22:25.917230       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-666975"
	I1124 03:22:25.917277       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 03:22:25.926353       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:22:25.926361       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [c53873e426c82f246b04d01a4a9cf91901306ac2a9a68e2c95c0e3cf274aeea3] <==
	I1124 03:22:58.645846       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:22:58.763750       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:22:58.865705       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:22:58.866056       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 03:22:58.866159       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:22:58.938021       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:22:58.938073       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:22:58.952707       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:22:58.953000       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:22:58.953024       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:22:58.954126       1 config.go:200] "Starting service config controller"
	I1124 03:22:58.954150       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:22:58.967288       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:22:58.967324       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:22:58.967349       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:22:58.967354       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:22:58.969212       1 config.go:309] "Starting node config controller"
	I1124 03:22:58.969234       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:22:59.055009       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:22:59.084671       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:22:59.084688       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:22:59.084703       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f8780ec394b3127a0cd31cc71c9a513663a1611b36af7c3312a4c3adc3591156] <==
	I1124 03:22:17.559643       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:22:20.225779       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:22:22.771441       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:22:22.771475       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 03:22:22.771578       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:22:22.869006       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:22:22.869825       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:22:22.875657       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:22:22.887352       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:22:22.887451       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:22:22.889046       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:22:22.889127       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:22:22.889447       1 config.go:200] "Starting service config controller"
	I1124 03:22:22.889527       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:22:22.903444       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:22:22.903541       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:22:22.904106       1 config.go:309] "Starting node config controller"
	I1124 03:22:22.904128       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:22:22.904137       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:22:22.989329       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:22:22.990491       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:22:23.003695       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [bc163b223ae5d7712d9c90f2b3f01c2c1a3cc90b21407bb5237da387b7cee79f] <==
	I1124 03:22:20.779891       1 serving.go:386] Generated self-signed cert in-memory
	W1124 03:22:22.623070       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:22:22.623123       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 03:22:22.623135       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:22:22.623143       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:22:22.700522       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:22:22.700549       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:22:22.716628       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 03:22:22.717342       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:22:22.717369       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:22:22.717395       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:22:22.818155       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:22:42.365559       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 03:22:42.365605       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 03:22:42.365641       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 03:22:42.365672       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:22:42.366974       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 03:22:42.367009       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d91469046e5e8fc638a7af83b93e20d2d383cb271a7fe25b1debd3fd5d84826c] <==
	I1124 03:22:57.074444       1 serving.go:386] Generated self-signed cert in-memory
	I1124 03:22:58.486187       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:22:58.502609       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:22:58.516288       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 03:22:58.516337       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 03:22:58.516383       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:22:58.516397       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:22:58.516421       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 03:22:58.516434       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 03:22:58.517725       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 03:22:58.517850       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:22:58.618486       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 03:22:58.618620       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 03:22:58.618768       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:33:05 functional-666975 kubelet[3844]: E1124 03:33:05.796096    3844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhtsd" podUID="dfe95e89-35af-4d1a-89f1-d2e2f67de243"
	Nov 24 03:33:10 functional-666975 kubelet[3844]: E1124 03:33:10.796311    3844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-q2t7f" podUID="2bb3d8fc-cd52-41c7-b84b-7a9839e16a3f"
	Nov 24 03:33:19 functional-666975 kubelet[3844]: E1124 03:33:19.795567    3844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhtsd" podUID="dfe95e89-35af-4d1a-89f1-d2e2f67de243"
	Nov 24 03:33:24 functional-666975 kubelet[3844]: E1124 03:33:24.796201    3844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-q2t7f" podUID="2bb3d8fc-cd52-41c7-b84b-7a9839e16a3f"
	Nov 24 03:33:31 functional-666975 kubelet[3844]: E1124 03:33:31.796157    3844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhtsd" podUID="dfe95e89-35af-4d1a-89f1-d2e2f67de243"
	Nov 24 03:33:34 functional-666975 kubelet[3844]: I1124 03:33:34.097356    3844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d4245ccc-74e9-4044-afcc-43d4ed5ce425-test-volume\") pod \"busybox-mount\" (UID: \"d4245ccc-74e9-4044-afcc-43d4ed5ce425\") " pod="default/busybox-mount"
	Nov 24 03:33:34 functional-666975 kubelet[3844]: I1124 03:33:34.097420    3844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scbk7\" (UniqueName: \"kubernetes.io/projected/d4245ccc-74e9-4044-afcc-43d4ed5ce425-kube-api-access-scbk7\") pod \"busybox-mount\" (UID: \"d4245ccc-74e9-4044-afcc-43d4ed5ce425\") " pod="default/busybox-mount"
	Nov 24 03:33:35 functional-666975 kubelet[3844]: E1124 03:33:35.796263    3844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-q2t7f" podUID="2bb3d8fc-cd52-41c7-b84b-7a9839e16a3f"
	Nov 24 03:33:37 functional-666975 kubelet[3844]: I1124 03:33:37.722339    3844 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d4245ccc-74e9-4044-afcc-43d4ed5ce425-test-volume\") pod \"d4245ccc-74e9-4044-afcc-43d4ed5ce425\" (UID: \"d4245ccc-74e9-4044-afcc-43d4ed5ce425\") "
	Nov 24 03:33:37 functional-666975 kubelet[3844]: I1124 03:33:37.722416    3844 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scbk7\" (UniqueName: \"kubernetes.io/projected/d4245ccc-74e9-4044-afcc-43d4ed5ce425-kube-api-access-scbk7\") pod \"d4245ccc-74e9-4044-afcc-43d4ed5ce425\" (UID: \"d4245ccc-74e9-4044-afcc-43d4ed5ce425\") "
	Nov 24 03:33:37 functional-666975 kubelet[3844]: I1124 03:33:37.722961    3844 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4245ccc-74e9-4044-afcc-43d4ed5ce425-test-volume" (OuterVolumeSpecName: "test-volume") pod "d4245ccc-74e9-4044-afcc-43d4ed5ce425" (UID: "d4245ccc-74e9-4044-afcc-43d4ed5ce425"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 24 03:33:37 functional-666975 kubelet[3844]: I1124 03:33:37.726701    3844 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4245ccc-74e9-4044-afcc-43d4ed5ce425-kube-api-access-scbk7" (OuterVolumeSpecName: "kube-api-access-scbk7") pod "d4245ccc-74e9-4044-afcc-43d4ed5ce425" (UID: "d4245ccc-74e9-4044-afcc-43d4ed5ce425"). InnerVolumeSpecName "kube-api-access-scbk7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 24 03:33:37 functional-666975 kubelet[3844]: I1124 03:33:37.823375    3844 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-scbk7\" (UniqueName: \"kubernetes.io/projected/d4245ccc-74e9-4044-afcc-43d4ed5ce425-kube-api-access-scbk7\") on node \"functional-666975\" DevicePath \"\""
	Nov 24 03:33:37 functional-666975 kubelet[3844]: I1124 03:33:37.823416    3844 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d4245ccc-74e9-4044-afcc-43d4ed5ce425-test-volume\") on node \"functional-666975\" DevicePath \"\""
	Nov 24 03:33:38 functional-666975 kubelet[3844]: I1124 03:33:38.576758    3844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9392b97b256db30e411993124c4be23d970ef82a5657f5e92a383ec5e56f6108"
	Nov 24 03:33:44 functional-666975 kubelet[3844]: I1124 03:33:44.673754    3844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f1138673-417e-480b-86a1-9843f58ff1c6-tmp-volume\") pod \"dashboard-metrics-scraper-77bf4d6c4c-8sddn\" (UID: \"f1138673-417e-480b-86a1-9843f58ff1c6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-8sddn"
	Nov 24 03:33:44 functional-666975 kubelet[3844]: I1124 03:33:44.673828    3844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4wt2\" (UniqueName: \"kubernetes.io/projected/f1138673-417e-480b-86a1-9843f58ff1c6-kube-api-access-n4wt2\") pod \"dashboard-metrics-scraper-77bf4d6c4c-8sddn\" (UID: \"f1138673-417e-480b-86a1-9843f58ff1c6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-8sddn"
	Nov 24 03:33:44 functional-666975 kubelet[3844]: I1124 03:33:44.673865    3844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6np6\" (UniqueName: \"kubernetes.io/projected/46e85963-ed1c-42d0-abf2-b49ee92fa14a-kube-api-access-t6np6\") pod \"kubernetes-dashboard-855c9754f9-8xnss\" (UID: \"46e85963-ed1c-42d0-abf2-b49ee92fa14a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8xnss"
	Nov 24 03:33:44 functional-666975 kubelet[3844]: I1124 03:33:44.673910    3844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/46e85963-ed1c-42d0-abf2-b49ee92fa14a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8xnss\" (UID: \"46e85963-ed1c-42d0-abf2-b49ee92fa14a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8xnss"
	Nov 24 03:33:44 functional-666975 kubelet[3844]: W1124 03:33:44.902678    3844 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5e7087edd018071d735ca2000a0809b8172c249f2081d7d570329c2ea300766b/crio-7710c4ffdbbe63b214841eb8e9050a872b1d3126a3bd508cd2c80a947a710490 WatchSource:0}: Error finding container 7710c4ffdbbe63b214841eb8e9050a872b1d3126a3bd508cd2c80a947a710490: Status 404 returned error can't find the container with id 7710c4ffdbbe63b214841eb8e9050a872b1d3126a3bd508cd2c80a947a710490
	Nov 24 03:33:45 functional-666975 kubelet[3844]: E1124 03:33:45.796428    3844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhtsd" podUID="dfe95e89-35af-4d1a-89f1-d2e2f67de243"
	Nov 24 03:33:49 functional-666975 kubelet[3844]: E1124 03:33:49.796642    3844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-q2t7f" podUID="2bb3d8fc-cd52-41c7-b84b-7a9839e16a3f"
	Nov 24 03:33:50 functional-666975 kubelet[3844]: I1124 03:33:50.639138    3844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8xnss" podStartSLOduration=2.5165891179999997 podStartE2EDuration="6.639116999s" podCreationTimestamp="2025-11-24 03:33:44 +0000 UTC" firstStartedPulling="2025-11-24 03:33:44.879677891 +0000 UTC m=+652.244553874" lastFinishedPulling="2025-11-24 03:33:49.002205764 +0000 UTC m=+656.367081755" observedRunningTime="2025-11-24 03:33:49.636618093 +0000 UTC m=+657.001494109" watchObservedRunningTime="2025-11-24 03:33:50.639116999 +0000 UTC m=+658.003992982"
	Nov 24 03:33:56 functional-666975 kubelet[3844]: E1124 03:33:56.796554    3844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hhtsd" podUID="dfe95e89-35af-4d1a-89f1-d2e2f67de243"
	Nov 24 03:34:00 functional-666975 kubelet[3844]: E1124 03:34:00.796747    3844 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-q2t7f" podUID="2bb3d8fc-cd52-41c7-b84b-7a9839e16a3f"
	
	
	==> kubernetes-dashboard [d2350540d69c87ed0ba32abb1498fe38b20796b5c674d26e07990bb42db127f6] <==
	2025/11/24 03:33:49 Starting overwatch
	2025/11/24 03:33:49 Using namespace: kubernetes-dashboard
	2025/11/24 03:33:49 Using in-cluster config to connect to apiserver
	2025/11/24 03:33:49 Using secret token for csrf signing
	2025/11/24 03:33:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 03:33:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 03:33:49 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 03:33:49 Generating JWE encryption key
	2025/11/24 03:33:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 03:33:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 03:33:49 Initializing JWE encryption key from synchronized object
	2025/11/24 03:33:49 Creating in-cluster Sidecar client
	2025/11/24 03:33:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 03:33:49 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [588bef5b1b6192847b377f237a1413d8d1c4a26c8da0846d3776055dff96c228] <==
	W1124 03:33:38.937040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:40.940942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:40.947256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:42.954617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:42.963402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:44.966798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:44.971286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:46.976575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:46.981858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:48.984961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:48.989982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:50.993024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:51.001277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:53.005172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:53.011478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:55.015555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:55.020861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:57.024329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:57.028928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:59.031716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:33:59.038357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:34:01.042574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:34:01.050303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:34:03.053639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:34:03.059410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [62c47d305962f6e2af7fca4fe71bf451ef7436e2df31f591dd7929b766a69189] <==
	I1124 03:22:31.742164       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:22:31.754362       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:22:31.754412       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:22:31.756770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:22:35.211305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:22:39.473009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-666975 -n functional-666975
helpers_test.go:269: (dbg) Run:  kubectl --context functional-666975 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-hhtsd hello-node-connect-7d85dfc575-q2t7f
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-666975 describe pod busybox-mount hello-node-75c85bcc94-hhtsd hello-node-connect-7d85dfc575-q2t7f
helpers_test.go:290: (dbg) kubectl --context functional-666975 describe pod busybox-mount hello-node-75c85bcc94-hhtsd hello-node-connect-7d85dfc575-q2t7f:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-666975/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 03:33:33 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://bb157daff1add080818feeccc9d7b75de3f4cb03293b4f92d9592c98f1569758
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 24 Nov 2025 03:33:36 +0000
	      Finished:     Mon, 24 Nov 2025 03:33:36 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-scbk7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-scbk7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  30s   default-scheduler  Successfully assigned default/busybox-mount to functional-666975
	  Normal  Pulling    30s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     28s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.043s (2.043s including waiting). Image size: 3774172 bytes.
	  Normal  Created    28s   kubelet            Created container: mount-munger
	  Normal  Started    28s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-hhtsd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-666975/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 03:23:24 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rc9zj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rc9zj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hhtsd to functional-666975
	  Normal   Pulling    7m51s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m51s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m51s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    33s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     33s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-q2t7f
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-666975/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 03:24:00 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-grwmv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-grwmv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-q2t7f to functional-666975
	  Normal   Pulling    7m14s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m14s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m14s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m53s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m53s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.55s)
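
The repeated pull failure in these events is CRI-O short-name resolution: with short-name mode set to "enforcing", the unqualified reference kicbase/echo-server is ambiguous and the pull is refused before any registry is tried. A minimal workaround sketch follows; the profile name is taken from this run, the alias file name is hypothetical, and neither command is part of the test suite:

	# Option 1: deploy with a fully qualified image reference instead of a short name
	kubectl --context functional-666975 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest

	# Option 2: register a short-name alias inside the node, then restart CRI-O
	# (file name 99-echo-server.conf is illustrative)
	minikube -p functional-666975 ssh
	sudo tee /etc/containers/registries.conf.d/99-echo-server.conf <<'EOF'
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF
	sudo systemctl restart crio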

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image load --daemon kicbase/echo-server:functional-666975 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-666975" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image load --daemon kicbase/echo-server:functional-666975 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-666975" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-666975
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image load --daemon kicbase/echo-server:functional-666975 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-666975" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)
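
All three daemon-load variants fail the same post-load check. When triaging this class of failure, the first question is whether the tag ever reached the node's CRI-O image store; a hedged verification sketch, mirroring the test's own "image ls" step plus a direct runtime query (profile name from this run):

	# what the test itself checks
	out/minikube-linux-arm64 -p functional-666975 image ls
	# query CRI-O directly inside the node
	minikube -p functional-666975 ssh -- sudo crictl images | grep echo-server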

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-666975 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-666975 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-hhtsd" [dfe95e89-35af-4d1a-89f1-d2e2f67de243] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-666975 -n functional-666975
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-24 03:33:24.743250198 +0000 UTC m=+1259.040187970
functional_test.go:1460: (dbg) Run:  kubectl --context functional-666975 describe po hello-node-75c85bcc94-hhtsd -n default
functional_test.go:1460: (dbg) kubectl --context functional-666975 describe po hello-node-75c85bcc94-hhtsd -n default:
Name:             hello-node-75c85bcc94-hhtsd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-666975/192.168.49.2
Start Time:       Mon, 24 Nov 2025 03:23:24 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rc9zj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-rc9zj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hhtsd to functional-666975
Normal   Pulling    7m11s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m11s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m11s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     4m52s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m39s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-666975 logs hello-node-75c85bcc94-hhtsd -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-666975 logs hello-node-75c85bcc94-hhtsd -n default: exit status 1 (102.78284ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-hhtsd" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-666975 logs hello-node-75c85bcc94-hhtsd -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.85s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image save kicbase/echo-server:functional-666975 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1124 03:23:26.132452  315038 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:23:26.132718  315038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:23:26.132754  315038 out.go:374] Setting ErrFile to fd 2...
	I1124 03:23:26.132776  315038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:23:26.133064  315038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:23:26.133726  315038 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:23:26.133893  315038 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:23:26.134502  315038 cli_runner.go:164] Run: docker container inspect functional-666975 --format={{.State.Status}}
	I1124 03:23:26.151453  315038 ssh_runner.go:195] Run: systemctl --version
	I1124 03:23:26.151505  315038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-666975
	I1124 03:23:26.168997  315038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/functional-666975/id_rsa Username:docker}
	I1124 03:23:26.268757  315038 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1124 03:23:26.268861  315038 cache_images.go:255] Failed to load cached images for "functional-666975": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1124 03:23:26.268889  315038 cache_images.go:267] failed pushing to: functional-666975

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
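
The stderr above also shows the causal chain for this failure: ImageSaveToFile never wrote the tarball, so this load fails at the stat. Assuming the tag actually exists in the runtime, the expected save/load round trip looks like this sketch (the /tmp path is illustrative, not the path the suite uses):

	out/minikube-linux-arm64 -p functional-666975 image save \
	  kicbase/echo-server:functional-666975 /tmp/echo-server-save.tar
	out/minikube-linux-arm64 -p functional-666975 image load /tmp/echo-server-save.tar
	# the tag should now be visible again in the node's image store
	out/minikube-linux-arm64 -p functional-666975 image ls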

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-666975
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image save --daemon kicbase/echo-server:functional-666975 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-666975
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-666975: exit status 1 (18.763337ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-666975

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-666975

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
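
Here the inspect step fails because "image save --daemon" had nothing to export into the local Docker daemon. When the save does succeed, the test looks the image up under the localhost/ prefix, so a quick check on the Docker side is (a sketch, not part of the suite):

	# list any localhost-prefixed copies of the image
	docker image ls localhost/kicbase/echo-server
	# the exact lookup the test performs
	docker image inspect localhost/kicbase/echo-server:functional-666975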

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666975 service --namespace=default --https --url hello-node: exit status 115 (387.697079ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31654
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-666975 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666975 service hello-node --url --format={{.IP}}: exit status 115 (392.19641ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-666975 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666975 service hello-node --url: exit status 115 (402.923749ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31654
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-666975 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31654
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.40s)
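Note on the two ServiceCmd failures above: both Format and URL exit with SVC_UNREACHABLE because no running pod backs the hello-node service; the NodePort URL itself (http://192.168.49.2:31654) is still printed, so the gap is at the endpoint level, not at name or port resolution. Below is a minimal sketch of checking for ready endpoints before trusting such a URL. It assumes the kubecontext carries the profile name functional-666975, as minikube normally sets it; the file name and logic are illustrative and not part of the suite.

// endpoint_check.go: sketch (not part of the test suite) that asks kubectl
// whether any pod IPs back a service before its NodePort URL is used.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// jsonpath over the Endpoints object: empty output means no ready pods,
	// which is exactly the state behind the SVC_UNREACHABLE exits above.
	out, err := exec.Command("kubectl", "--context", "functional-666975",
		"get", "endpoints", "hello-node",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").CombinedOutput()
	if err != nil {
		fmt.Printf("lookup failed: %v\n%s", err, out)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("no ready endpoints behind hello-node; expect SVC_UNREACHABLE")
		return
	}
	fmt.Printf("ready endpoints: %s\n", out)
}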

                                                
                                    
TestJSONOutput/pause/Command (2.28s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-901209 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-901209 --output=json --user=testUser: exit status 80 (2.27623522s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"39231ade-dfa2-48e3-aa57-3d93aaefd602","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-901209 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"dabd733a-46f3-441a-bfde-37c7f47ea9b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-24T03:47:04Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"36007736-ff2e-43cb-a354-0adfc182d6ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-901209 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.28s)
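The --output=json lines above are CloudEvents; the second event carries the machine-readable failure (name GUEST_PAUSE, exitcode "80") alongside the raw runc stderr. A small sketch of decoding such an event follows; the struct mirrors only the fields visible in this log, not minikube's own types.

// decode_event.go: unmarshal one of the CloudEvents printed above and pull
// out the error name, exit code, and message.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	Type string `json:"type"` // e.g. io.k8s.sigs.minikube.error
	Data struct {
		ExitCode string `json:"exitcode"` // quoted in the log, hence a string
		Message  string `json:"message"`
		Name     string `json:"name"` // e.g. GUEST_PAUSE
	} `json:"data"`
}

func main() {
	// Abbreviated copy of the error event from the log above.
	raw := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",` +
		`"data":{"exitcode":"80","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1","name":"GUEST_PAUSE"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(raw), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("%s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
	}
}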

                                                
                                    
TestJSONOutput/unpause/Command (1.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-901209 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-901209 --output=json --user=testUser: exit status 80 (1.658991729s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bba5ef18-70f5-4526-8999-e5c6776b5c6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-901209 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"f75eb879-e7e2-4f16-856c-4d1617ccd7f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-24T03:47:06Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"0af13ebd-b552-4209-887a-872a6c9dd91d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-901209 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.66s)
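Both JSONOutput failures, and the pause failures later in this report, bottom out in the same runc error: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory". runc keeps per-container state under its root directory, /run/runc by default when running as root, and on this crio node that directory is absent (crio may be driving its containers through a different runtime root). The sketch below reproduces the probe with an explicit existence check first; the pre-flight check is an illustration, not minikube's fix.

// runc_probe.go: pre-flight check for the failure mode above. The listing
// command is the exact one minikube runs, per the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
		// Matches this run: containers exist (crictl sees them), but runc
		// has no state directory, so listing (and therefore pause) fails.
		fmt.Println("/run/runc missing: runc has no container state here")
		return
	}
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}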

                                                
                                    
TestPause/serial/Pause (6.75s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-396108 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-396108 --alsologtostderr -v=5: exit status 80 (1.793129708s)

                                                
                                                
-- stdout --
	* Pausing node pause-396108 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 04:09:41.920390  453802 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:09:41.922550  453802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:09:41.922603  453802 out.go:374] Setting ErrFile to fd 2...
	I1124 04:09:41.922624  453802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:09:41.922948  453802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:09:41.923272  453802 out.go:368] Setting JSON to false
	I1124 04:09:41.923325  453802 mustload.go:66] Loading cluster: pause-396108
	I1124 04:09:41.923810  453802 config.go:182] Loaded profile config "pause-396108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:09:41.924365  453802 cli_runner.go:164] Run: docker container inspect pause-396108 --format={{.State.Status}}
	I1124 04:09:41.942312  453802 host.go:66] Checking if "pause-396108" exists ...
	I1124 04:09:41.942658  453802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:09:42.061166  453802 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 04:09:42.043003063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:09:42.062291  453802 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763935228-21975/minikube-v1.37.0-1763935228-21975-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763935228-21975-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-396108 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 04:09:42.074241  453802 out.go:179] * Pausing node pause-396108 ... 
	I1124 04:09:42.077480  453802 host.go:66] Checking if "pause-396108" exists ...
	I1124 04:09:42.078262  453802 ssh_runner.go:195] Run: systemctl --version
	I1124 04:09:42.078376  453802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:42.115439  453802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/pause-396108/id_rsa Username:docker}
	I1124 04:09:42.232254  453802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:09:42.249899  453802 pause.go:52] kubelet running: true
	I1124 04:09:42.249981  453802 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:09:42.497882  453802 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:09:42.497969  453802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:09:42.570045  453802 cri.go:89] found id: "510de710da2ba8ea819f9bd7d6008fc188c474758dc0982bc622288bdaf88a09"
	I1124 04:09:42.570111  453802 cri.go:89] found id: "ec93103e13f8d61b3bbca237e6337d82343ed31e17f5875e3885f8f49c8ba154"
	I1124 04:09:42.570130  453802 cri.go:89] found id: "1b824daf1dec3f99a1d5c16e27c0cfc32418b57d8ec747134d3686b45610c51f"
	I1124 04:09:42.570154  453802 cri.go:89] found id: "9d6051a594d1510f507c94cbd06679c59fda5ce1b35b721c4b945044a6c20cff"
	I1124 04:09:42.570171  453802 cri.go:89] found id: "5774c046abe9043aadbfd6b9c4831cb9ed1b5f813e056e0057a80407ea6f6d2f"
	I1124 04:09:42.570207  453802 cri.go:89] found id: "e043bea1c710a5ce21da1af2f69f48cc7f408d94c02ad317ef742420e6047668"
	I1124 04:09:42.570226  453802 cri.go:89] found id: "848b7a5b1960bf771106d3ade5f36482eb5247f3fdffae808cd1b74ec8b48cb5"
	I1124 04:09:42.570248  453802 cri.go:89] found id: "490bbd4ce436cb05c5881f746e8020778291193555fa5f32a43ae3598eddbd0d"
	I1124 04:09:42.570266  453802 cri.go:89] found id: "c8a48af8745d6e7ed60d98860bbdd720ffd910fb5f4ca540179f0c1ded57c194"
	I1124 04:09:42.570296  453802 cri.go:89] found id: "11beefbf1a8a00864fe3381844d37aedc1f9ce9831b394bb7bdc1ded4239d89e"
	I1124 04:09:42.570319  453802 cri.go:89] found id: "c7f699f1cc0ef484fe9224d86d2fb5cdd924b5f1e89ed3e12a85d92c72cc378e"
	I1124 04:09:42.570340  453802 cri.go:89] found id: "d013da0147b8934c86c12720efd80ec59f61d17784274267bb11bc96acac4c94"
	I1124 04:09:42.570359  453802 cri.go:89] found id: "5b712eafb8e889f1f03dcbca8e8cff4e20805686a79699de0f7ac21affd0b9f9"
	I1124 04:09:42.570379  453802 cri.go:89] found id: "94b696efd6b4382c567f3d5ef6f1fd9532fd20a48c4c2f66e053c1586cb5b17e"
	I1124 04:09:42.570405  453802 cri.go:89] found id: ""
	I1124 04:09:42.570513  453802 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:09:42.581979  453802 retry.go:31] will retry after 326.456678ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:09:42Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:09:42.909645  453802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:09:42.922438  453802 pause.go:52] kubelet running: false
	I1124 04:09:42.922530  453802 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:09:43.082788  453802 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:09:43.082865  453802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:09:43.154083  453802 cri.go:89] found id: "510de710da2ba8ea819f9bd7d6008fc188c474758dc0982bc622288bdaf88a09"
	I1124 04:09:43.154108  453802 cri.go:89] found id: "ec93103e13f8d61b3bbca237e6337d82343ed31e17f5875e3885f8f49c8ba154"
	I1124 04:09:43.154113  453802 cri.go:89] found id: "1b824daf1dec3f99a1d5c16e27c0cfc32418b57d8ec747134d3686b45610c51f"
	I1124 04:09:43.154117  453802 cri.go:89] found id: "9d6051a594d1510f507c94cbd06679c59fda5ce1b35b721c4b945044a6c20cff"
	I1124 04:09:43.154120  453802 cri.go:89] found id: "5774c046abe9043aadbfd6b9c4831cb9ed1b5f813e056e0057a80407ea6f6d2f"
	I1124 04:09:43.154123  453802 cri.go:89] found id: "e043bea1c710a5ce21da1af2f69f48cc7f408d94c02ad317ef742420e6047668"
	I1124 04:09:43.154126  453802 cri.go:89] found id: "848b7a5b1960bf771106d3ade5f36482eb5247f3fdffae808cd1b74ec8b48cb5"
	I1124 04:09:43.154129  453802 cri.go:89] found id: "490bbd4ce436cb05c5881f746e8020778291193555fa5f32a43ae3598eddbd0d"
	I1124 04:09:43.154138  453802 cri.go:89] found id: "c8a48af8745d6e7ed60d98860bbdd720ffd910fb5f4ca540179f0c1ded57c194"
	I1124 04:09:43.154147  453802 cri.go:89] found id: "11beefbf1a8a00864fe3381844d37aedc1f9ce9831b394bb7bdc1ded4239d89e"
	I1124 04:09:43.154151  453802 cri.go:89] found id: "c7f699f1cc0ef484fe9224d86d2fb5cdd924b5f1e89ed3e12a85d92c72cc378e"
	I1124 04:09:43.154154  453802 cri.go:89] found id: "d013da0147b8934c86c12720efd80ec59f61d17784274267bb11bc96acac4c94"
	I1124 04:09:43.154157  453802 cri.go:89] found id: "5b712eafb8e889f1f03dcbca8e8cff4e20805686a79699de0f7ac21affd0b9f9"
	I1124 04:09:43.154160  453802 cri.go:89] found id: "94b696efd6b4382c567f3d5ef6f1fd9532fd20a48c4c2f66e053c1586cb5b17e"
	I1124 04:09:43.154163  453802 cri.go:89] found id: ""
	I1124 04:09:43.154213  453802 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:09:43.165862  453802 retry.go:31] will retry after 191.439781ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:09:43Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:09:43.358272  453802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:09:43.371451  453802 pause.go:52] kubelet running: false
	I1124 04:09:43.371529  453802 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:09:43.539986  453802 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:09:43.540065  453802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:09:43.614298  453802 cri.go:89] found id: "510de710da2ba8ea819f9bd7d6008fc188c474758dc0982bc622288bdaf88a09"
	I1124 04:09:43.614319  453802 cri.go:89] found id: "ec93103e13f8d61b3bbca237e6337d82343ed31e17f5875e3885f8f49c8ba154"
	I1124 04:09:43.614324  453802 cri.go:89] found id: "1b824daf1dec3f99a1d5c16e27c0cfc32418b57d8ec747134d3686b45610c51f"
	I1124 04:09:43.614327  453802 cri.go:89] found id: "9d6051a594d1510f507c94cbd06679c59fda5ce1b35b721c4b945044a6c20cff"
	I1124 04:09:43.614330  453802 cri.go:89] found id: "5774c046abe9043aadbfd6b9c4831cb9ed1b5f813e056e0057a80407ea6f6d2f"
	I1124 04:09:43.614334  453802 cri.go:89] found id: "e043bea1c710a5ce21da1af2f69f48cc7f408d94c02ad317ef742420e6047668"
	I1124 04:09:43.614337  453802 cri.go:89] found id: "848b7a5b1960bf771106d3ade5f36482eb5247f3fdffae808cd1b74ec8b48cb5"
	I1124 04:09:43.614340  453802 cri.go:89] found id: "490bbd4ce436cb05c5881f746e8020778291193555fa5f32a43ae3598eddbd0d"
	I1124 04:09:43.614344  453802 cri.go:89] found id: "c8a48af8745d6e7ed60d98860bbdd720ffd910fb5f4ca540179f0c1ded57c194"
	I1124 04:09:43.614350  453802 cri.go:89] found id: "11beefbf1a8a00864fe3381844d37aedc1f9ce9831b394bb7bdc1ded4239d89e"
	I1124 04:09:43.614354  453802 cri.go:89] found id: "c7f699f1cc0ef484fe9224d86d2fb5cdd924b5f1e89ed3e12a85d92c72cc378e"
	I1124 04:09:43.614357  453802 cri.go:89] found id: "d013da0147b8934c86c12720efd80ec59f61d17784274267bb11bc96acac4c94"
	I1124 04:09:43.614360  453802 cri.go:89] found id: "5b712eafb8e889f1f03dcbca8e8cff4e20805686a79699de0f7ac21affd0b9f9"
	I1124 04:09:43.614366  453802 cri.go:89] found id: "94b696efd6b4382c567f3d5ef6f1fd9532fd20a48c4c2f66e053c1586cb5b17e"
	I1124 04:09:43.614373  453802 cri.go:89] found id: ""
	I1124 04:09:43.614433  453802 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:09:43.628857  453802 out.go:203] 
	W1124 04:09:43.631737  453802 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:09:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:09:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 04:09:43.631758  453802 out.go:285] * 
	* 
	W1124 04:09:43.637798  453802 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 04:09:43.640743  453802 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-396108 --alsologtostderr -v=5" : exit status 80
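The retry.go entries in the trace above show the pause path re-running the runc listing with short waits (326ms, then 191ms) before surfacing GUEST_PAUSE. A generic sketch of that retry-until-exhausted shape follows; it illustrates the pattern only and is not minikube's actual retry.go.

// retry_sketch.go: re-run a probe a few times with short waits, then
// surface the last error, mirroring the "will retry after ..." lines above.
package main

import (
	"fmt"
	"time"
)

func withRetries(attempts int, delay time.Duration, probe func() error) error {
	var last error
	for i := 0; i < attempts; i++ {
		if last = probe(); last == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, last)
		time.Sleep(delay)
	}
	return last // becomes the GUEST_PAUSE error once attempts are exhausted
}

func main() {
	err := withRetries(3, 300*time.Millisecond, func() error {
		return fmt.Errorf("list running: runc: exit status 1")
	})
	fmt.Println("giving up:", err)
}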
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-396108
helpers_test.go:243: (dbg) docker inspect pause-396108:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d",
	        "Created": "2025-11-24T04:07:55.891399667Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 447988,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:07:55.95087115Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d/hostname",
	        "HostsPath": "/var/lib/docker/containers/057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d/hosts",
	        "LogPath": "/var/lib/docker/containers/057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d/057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d-json.log",
	        "Name": "/pause-396108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-396108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-396108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d",
	                "LowerDir": "/var/lib/docker/overlay2/a8afb836e55c54139d976e57b05e90be0e57acec71a29ada2352540504372b50-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8afb836e55c54139d976e57b05e90be0e57acec71a29ada2352540504372b50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8afb836e55c54139d976e57b05e90be0e57acec71a29ada2352540504372b50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8afb836e55c54139d976e57b05e90be0e57acec71a29ada2352540504372b50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-396108",
	                "Source": "/var/lib/docker/volumes/pause-396108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-396108",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-396108",
	                "name.minikube.sigs.k8s.io": "pause-396108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b5b53415c9727420745f4408e8c207f52a72d53b915f23812bee1eebe926b61d",
	            "SandboxKey": "/var/run/docker/netns/b5b53415c972",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-396108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:fc:d4:91:90:f2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bc9e483a8917fcc6ac0fd77591b3183468ef19521a99971e75d95e0eb70d15c",
	                    "EndpointID": "dd7b7c0070075c1e5cf0961d7679a85a877ad50a78691d1311ce0cdbd5af0635",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-396108",
	                        "057fcb24b84c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
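Worth noting in the inspect dump above: the kic container itself reports "Status": "running" and "Paused": false, so the pause never reached the Docker layer; it failed inside the guest at the runc listing step. The same fact can be pulled with a one-line template query, sketched here with the standard docker inspect --format flag.

// paused_check.go: confirm what the inspect dump above already shows —
// the kic container is running and not paused at the Docker layer.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "inspect",
		"--format", "{{.State.Status}} paused={{.State.Paused}}",
		"pause-396108").CombinedOutput()
	if err != nil {
		fmt.Printf("inspect failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out) // expected here: "running paused=false"
}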
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-396108 -n pause-396108
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-396108 -n pause-396108: exit status 2 (359.159841ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-396108 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-396108 logs -n 25: (1.637015499s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-314310 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:03 UTC │ 24 Nov 25 04:04 UTC │
	│ start   │ -p missing-upgrade-935894 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-935894    │ jenkins │ v1.32.0 │ 24 Nov 25 04:03 UTC │ 24 Nov 25 04:04 UTC │
	│ start   │ -p NoKubernetes-314310 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │ 24 Nov 25 04:04 UTC │
	│ delete  │ -p NoKubernetes-314310                                                                                                                   │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │ 24 Nov 25 04:04 UTC │
	│ start   │ -p NoKubernetes-314310 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │ 24 Nov 25 04:04 UTC │
	│ start   │ -p missing-upgrade-935894 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-935894    │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │ 24 Nov 25 04:05 UTC │
	│ ssh     │ -p NoKubernetes-314310 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │                     │
	│ stop    │ -p NoKubernetes-314310                                                                                                                   │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │ 24 Nov 25 04:04 UTC │
	│ start   │ -p NoKubernetes-314310 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │ 24 Nov 25 04:05 UTC │
	│ ssh     │ -p NoKubernetes-314310 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:05 UTC │                     │
	│ delete  │ -p NoKubernetes-314310                                                                                                                   │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:05 UTC │ 24 Nov 25 04:05 UTC │
	│ start   │ -p kubernetes-upgrade-207884 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-207884 │ jenkins │ v1.37.0 │ 24 Nov 25 04:05 UTC │ 24 Nov 25 04:05 UTC │
	│ delete  │ -p missing-upgrade-935894                                                                                                                │ missing-upgrade-935894    │ jenkins │ v1.37.0 │ 24 Nov 25 04:05 UTC │ 24 Nov 25 04:05 UTC │
	│ stop    │ -p kubernetes-upgrade-207884                                                                                                             │ kubernetes-upgrade-207884 │ jenkins │ v1.37.0 │ 24 Nov 25 04:05 UTC │ 24 Nov 25 04:05 UTC │
	│ start   │ -p stopped-upgrade-191757 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-191757    │ jenkins │ v1.32.0 │ 24 Nov 25 04:05 UTC │ 24 Nov 25 04:06 UTC │
	│ start   │ -p kubernetes-upgrade-207884 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-207884 │ jenkins │ v1.37.0 │ 24 Nov 25 04:05 UTC │                     │
	│ stop    │ stopped-upgrade-191757 stop                                                                                                              │ stopped-upgrade-191757    │ jenkins │ v1.32.0 │ 24 Nov 25 04:06 UTC │ 24 Nov 25 04:06 UTC │
	│ start   │ -p stopped-upgrade-191757 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-191757    │ jenkins │ v1.37.0 │ 24 Nov 25 04:06 UTC │ 24 Nov 25 04:06 UTC │
	│ delete  │ -p stopped-upgrade-191757                                                                                                                │ stopped-upgrade-191757    │ jenkins │ v1.37.0 │ 24 Nov 25 04:06 UTC │ 24 Nov 25 04:06 UTC │
	│ start   │ -p running-upgrade-352504 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-352504    │ jenkins │ v1.32.0 │ 24 Nov 25 04:06 UTC │ 24 Nov 25 04:07 UTC │
	│ start   │ -p running-upgrade-352504 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-352504    │ jenkins │ v1.37.0 │ 24 Nov 25 04:07 UTC │ 24 Nov 25 04:07 UTC │
	│ delete  │ -p running-upgrade-352504                                                                                                                │ running-upgrade-352504    │ jenkins │ v1.37.0 │ 24 Nov 25 04:07 UTC │ 24 Nov 25 04:07 UTC │
	│ start   │ -p pause-396108 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-396108              │ jenkins │ v1.37.0 │ 24 Nov 25 04:07 UTC │ 24 Nov 25 04:09 UTC │
	│ start   │ -p pause-396108 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-396108              │ jenkins │ v1.37.0 │ 24 Nov 25 04:09 UTC │ 24 Nov 25 04:09 UTC │
	│ pause   │ -p pause-396108 --alsologtostderr -v=5                                                                                                   │ pause-396108              │ jenkins │ v1.37.0 │ 24 Nov 25 04:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:09:12
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:09:12.750350  451987 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:09:12.750617  451987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:09:12.750631  451987 out.go:374] Setting ErrFile to fd 2...
	I1124 04:09:12.750637  451987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:09:12.750902  451987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:09:12.751309  451987 out.go:368] Setting JSON to false
	I1124 04:09:12.752302  451987 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10282,"bootTime":1763947071,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:09:12.752378  451987 start.go:143] virtualization:  
	I1124 04:09:12.755378  451987 out.go:179] * [pause-396108] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:09:12.759439  451987 notify.go:221] Checking for updates...
	I1124 04:09:12.763205  451987 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:09:12.766081  451987 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:09:12.768993  451987 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:09:12.771907  451987 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:09:12.774775  451987 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:09:12.777748  451987 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:09:12.781485  451987 config.go:182] Loaded profile config "pause-396108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:09:12.782101  451987 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:09:12.823554  451987 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:09:12.823677  451987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:09:12.883750  451987 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 04:09:12.872982093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:09:12.883889  451987 docker.go:319] overlay module found
	I1124 04:09:12.887347  451987 out.go:179] * Using the docker driver based on existing profile
	I1124 04:09:12.890241  451987 start.go:309] selected driver: docker
	I1124 04:09:12.890268  451987 start.go:927] validating driver "docker" against &{Name:pause-396108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-396108 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:09:12.890404  451987 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:09:12.890549  451987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:09:12.951141  451987 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 04:09:12.941205945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:09:12.951536  451987 cni.go:84] Creating CNI manager for ""
	I1124 04:09:12.951605  451987 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:09:12.951663  451987 start.go:353] cluster config:
	{Name:pause-396108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-396108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
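
The single line above is minikube's in-memory cluster config, which is persisted as JSON in the profile's config.json (saved a few lines below). A minimal Go sketch, assuming only that the file is valid JSON (the real struct lives in minikube's config package, and this CI job keeps its MINIKUBE_HOME under the Jenkins workspace rather than $HOME), to pretty-print it for inspection:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Path is an assumption for illustration; adjust to your MINIKUBE_HOME.
	raw, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/pause-396108/config.json"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg map[string]any // decode generically rather than guessing minikube's struct
	if err := json.Unmarshal(raw, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pretty, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(pretty))
}
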
	I1124 04:09:12.954856  451987 out.go:179] * Starting "pause-396108" primary control-plane node in "pause-396108" cluster
	I1124 04:09:12.957641  451987 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:09:12.960702  451987 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:09:12.963567  451987 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:09:12.963636  451987 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:09:12.963670  451987 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 04:09:12.963683  451987 cache.go:65] Caching tarball of preloaded images
	I1124 04:09:12.963778  451987 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:09:12.963789  451987 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 04:09:12.963920  451987 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/config.json ...
	I1124 04:09:12.990019  451987 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:09:12.990043  451987 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:09:12.990062  451987 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:09:12.990090  451987 start.go:360] acquireMachinesLock for pause-396108: {Name:mk45a889be94844acd02a961e5f42591cb13ad56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:09:12.990160  451987 start.go:364] duration metric: took 43.365µs to acquireMachinesLock for "pause-396108"
	I1124 04:09:12.990183  451987 start.go:96] Skipping create...Using existing machine configuration
	I1124 04:09:12.990191  451987 fix.go:54] fixHost starting: 
	I1124 04:09:12.990504  451987 cli_runner.go:164] Run: docker container inspect pause-396108 --format={{.State.Status}}
	I1124 04:09:13.010803  451987 fix.go:112] recreateIfNeeded on pause-396108: state=Running err=<nil>
	W1124 04:09:13.010848  451987 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 04:09:13.014063  451987 out.go:252] * Updating the running docker "pause-396108" container ...
	I1124 04:09:13.014110  451987 machine.go:94] provisionDockerMachine start ...
	I1124 04:09:13.014194  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:13.034947  451987 main.go:143] libmachine: Using SSH client type: native
	I1124 04:09:13.035311  451987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1124 04:09:13.035326  451987 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:09:13.181968  451987 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-396108
	
	I1124 04:09:13.181995  451987 ubuntu.go:182] provisioning hostname "pause-396108"
	I1124 04:09:13.182068  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:13.205774  451987 main.go:143] libmachine: Using SSH client type: native
	I1124 04:09:13.206090  451987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1124 04:09:13.206106  451987 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-396108 && echo "pause-396108" | sudo tee /etc/hostname
	I1124 04:09:13.363087  451987 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-396108
	
	I1124 04:09:13.363212  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:13.381386  451987 main.go:143] libmachine: Using SSH client type: native
	I1124 04:09:13.381707  451987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1124 04:09:13.381730  451987 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-396108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-396108/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-396108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 04:09:13.540893  451987 main.go:143] libmachine: SSH cmd err, output: <nil>: 
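
Every provisioning step above goes over SSH to the container's forwarded 22/tcp port, which minikube discovers with the docker container inspect template shown in the log. A small self-contained Go sketch of that lookup (sshPort is a hypothetical helper name; the template string is the one from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort returns the host port that Docker mapped to the container's 22/tcp.
func sshPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("pause-396108")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh to 127.0.0.1:" + port) // e.g. 33396 in the log above
}
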
	I1124 04:09:13.540922  451987 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:09:13.540981  451987 ubuntu.go:190] setting up certificates
	I1124 04:09:13.540992  451987 provision.go:84] configureAuth start
	I1124 04:09:13.541085  451987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-396108
	I1124 04:09:13.569326  451987 provision.go:143] copyHostCerts
	I1124 04:09:13.569392  451987 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:09:13.569406  451987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:09:13.569488  451987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:09:13.569614  451987 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:09:13.569623  451987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:09:13.569653  451987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:09:13.569704  451987 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:09:13.569708  451987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:09:13.569736  451987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:09:13.569791  451987 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.pause-396108 san=[127.0.0.1 192.168.85.2 localhost minikube pause-396108]
	I1124 04:09:14.006301  451987 provision.go:177] copyRemoteCerts
	I1124 04:09:14.006411  451987 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:09:14.006493  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:14.028005  451987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/pause-396108/id_rsa Username:docker}
	I1124 04:09:14.144256  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:09:14.164482  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 04:09:14.184367  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 04:09:14.206927  451987 provision.go:87] duration metric: took 665.908716ms to configureAuth
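
configureAuth regenerates the machine's server certificate with the SAN set shown above (127.0.0.1, 192.168.85.2, localhost, minikube, pause-396108). An illustrative Go sketch of issuing such a certificate with crypto/x509; it is self-signed here for brevity, whereas minikube signs with its CA key, and the expiry mirrors the CertExpiration value from the config above:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-396108"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "pause-396108"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	// Self-signed (template used as its own parent); minikube uses ca.pem/ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
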
	I1124 04:09:14.206956  451987 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:09:14.207230  451987 config.go:182] Loaded profile config "pause-396108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:09:14.207402  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:14.227218  451987 main.go:143] libmachine: Using SSH client type: native
	I1124 04:09:14.227558  451987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1124 04:09:14.227580  451987 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:09:19.650538  451987 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 04:09:19.650565  451987 machine.go:97] duration metric: took 6.636446475s to provisionDockerMachine
	I1124 04:09:19.650577  451987 start.go:293] postStartSetup for "pause-396108" (driver="docker")
	I1124 04:09:19.650588  451987 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:09:19.650671  451987 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:09:19.650719  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:19.668078  451987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/pause-396108/id_rsa Username:docker}
	I1124 04:09:19.774557  451987 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:09:19.777828  451987 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:09:19.777856  451987 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:09:19.777867  451987 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:09:19.777919  451987 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:09:19.777998  451987 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:09:19.778099  451987 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:09:19.785799  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:09:19.803579  451987 start.go:296] duration metric: took 152.986539ms for postStartSetup
	I1124 04:09:19.803679  451987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:09:19.803744  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:19.820378  451987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/pause-396108/id_rsa Username:docker}
	I1124 04:09:19.919625  451987 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:09:19.924487  451987 fix.go:56] duration metric: took 6.934286176s for fixHost
	I1124 04:09:19.924515  451987 start.go:83] releasing machines lock for "pause-396108", held for 6.934343424s
	I1124 04:09:19.924583  451987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-396108
	I1124 04:09:19.939840  451987 ssh_runner.go:195] Run: cat /version.json
	I1124 04:09:19.939868  451987 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:09:19.939917  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:19.939922  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:19.956671  451987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/pause-396108/id_rsa Username:docker}
	I1124 04:09:19.966642  451987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/pause-396108/id_rsa Username:docker}
	I1124 04:09:20.161296  451987 ssh_runner.go:195] Run: systemctl --version
	I1124 04:09:20.168131  451987 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:09:20.211941  451987 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:09:20.216129  451987 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:09:20.216221  451987 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:09:20.224585  451987 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 04:09:20.224661  451987 start.go:496] detecting cgroup driver to use...
	I1124 04:09:20.224712  451987 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:09:20.224763  451987 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:09:20.239105  451987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:09:20.252318  451987 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:09:20.252411  451987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:09:20.268162  451987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:09:20.281131  451987 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:09:20.421944  451987 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:09:20.587952  451987 docker.go:234] disabling docker service ...
	I1124 04:09:20.588069  451987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:09:20.613735  451987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:09:20.634621  451987 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:09:20.832850  451987 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:09:21.032053  451987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:09:21.048361  451987 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:09:21.067442  451987 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:09:21.067561  451987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.077505  451987 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:09:21.077684  451987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.087529  451987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.097191  451987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.107367  451987 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:09:21.116407  451987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.128083  451987 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.136921  451987 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.145813  451987 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:09:21.153095  451987 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:09:21.160358  451987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:09:21.307643  451987 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 04:09:21.519902  451987 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:09:21.520009  451987 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:09:21.524081  451987 start.go:564] Will wait 60s for crictl version
	I1124 04:09:21.524148  451987 ssh_runner.go:195] Run: which crictl
	I1124 04:09:21.528037  451987 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:09:21.561138  451987 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:09:21.561256  451987 ssh_runner.go:195] Run: crio --version
	I1124 04:09:21.591751  451987 ssh_runner.go:195] Run: crio --version
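
The CRI-O reconfiguration above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf followed by a daemon-reload and restart. A rough Go equivalent of the two key edits (pause image and cgroup manager), assuming the same file layout as the kicbase image; this is a sketch, not minikube's implementation:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Replace whole matching lines, like the `sed -i 's|^.*pause_image = .*$|...|'` calls above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
}
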
	I1124 04:09:21.629832  451987 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:09:21.632940  451987 cli_runner.go:164] Run: docker network inspect pause-396108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:09:21.648157  451987 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 04:09:21.652060  451987 kubeadm.go:884] updating cluster {Name:pause-396108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-396108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:09:21.652203  451987 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:09:21.652254  451987 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:09:21.689646  451987 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:09:21.689674  451987 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:09:21.689730  451987 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:09:21.718864  451987 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:09:21.718932  451987 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:09:21.718947  451987 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 04:09:21.719046  451987 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-396108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-396108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 04:09:21.719130  451987 ssh_runner.go:195] Run: crio config
	I1124 04:09:21.777126  451987 cni.go:84] Creating CNI manager for ""
	I1124 04:09:21.777154  451987 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:09:21.777177  451987 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:09:21.777219  451987 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-396108 NodeName:pause-396108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:09:21.777385  451987 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-396108"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
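
The rendered kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A quick Go sketch to split such a stream and list each document's apiVersion and kind; the gopkg.in/yaml.v3 dependency is an assumption for illustration, since minikube renders these documents from templates rather than parsing them back:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f) // handles the `---`-separated multi-document stream
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
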
	
	I1124 04:09:21.777463  451987 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:09:21.784833  451987 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:09:21.784956  451987 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:09:21.792387  451987 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1124 04:09:21.815953  451987 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:09:21.831557  451987 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1124 04:09:21.855404  451987 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:09:21.862966  451987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:09:22.169485  451987 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:09:22.187067  451987 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108 for IP: 192.168.85.2
	I1124 04:09:22.187084  451987 certs.go:195] generating shared ca certs ...
	I1124 04:09:22.187100  451987 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:09:22.187242  451987 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:09:22.187283  451987 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:09:22.187290  451987 certs.go:257] generating profile certs ...
	I1124 04:09:22.187372  451987 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/client.key
	I1124 04:09:22.187444  451987 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/apiserver.key.0991ffb6
	I1124 04:09:22.187485  451987 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/proxy-client.key
	I1124 04:09:22.187598  451987 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:09:22.187628  451987 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:09:22.187637  451987 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:09:22.187662  451987 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:09:22.187685  451987 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:09:22.187707  451987 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:09:22.187754  451987 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:09:22.188414  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:09:22.214114  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:09:22.244496  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:09:22.264271  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:09:22.286807  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 04:09:22.309966  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 04:09:22.341529  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:09:22.370650  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1124 04:09:22.399461  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:09:22.433686  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:09:22.467149  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:09:22.492716  451987 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:09:22.517092  451987 ssh_runner.go:195] Run: openssl version
	I1124 04:09:22.531578  451987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:09:22.543884  451987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:09:22.554862  451987 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:09:22.554978  451987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:09:22.607494  451987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:09:22.616099  451987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:09:22.628603  451987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:09:22.637879  451987 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:09:22.638001  451987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:09:22.687076  451987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
	I1124 04:09:22.695583  451987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:09:22.704403  451987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:09:22.709443  451987 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:09:22.709597  451987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:09:22.797497  451987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 04:09:22.805843  451987 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:09:22.818610  451987 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 04:09:22.871698  451987 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 04:09:22.915141  451987 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 04:09:22.978390  451987 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 04:09:23.024028  451987 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 04:09:23.080054  451987 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
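
Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks a single question: does the certificate expire within the next 24 hours? The same check in Go, as a minimal sketch against one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: fail if NotAfter is less than a day away.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400s")
	} else {
		fmt.Println("certificate is valid for at least another day")
	}
}
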
	I1124 04:09:23.131338  451987 kubeadm.go:401] StartCluster: {Name:pause-396108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-396108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:09:23.131508  451987 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:09:23.131616  451987 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:09:23.190634  451987 cri.go:89] found id: "510de710da2ba8ea819f9bd7d6008fc188c474758dc0982bc622288bdaf88a09"
	I1124 04:09:23.190708  451987 cri.go:89] found id: "ec93103e13f8d61b3bbca237e6337d82343ed31e17f5875e3885f8f49c8ba154"
	I1124 04:09:23.190727  451987 cri.go:89] found id: "1b824daf1dec3f99a1d5c16e27c0cfc32418b57d8ec747134d3686b45610c51f"
	I1124 04:09:23.190749  451987 cri.go:89] found id: "9d6051a594d1510f507c94cbd06679c59fda5ce1b35b721c4b945044a6c20cff"
	I1124 04:09:23.190783  451987 cri.go:89] found id: "5774c046abe9043aadbfd6b9c4831cb9ed1b5f813e056e0057a80407ea6f6d2f"
	I1124 04:09:23.190808  451987 cri.go:89] found id: "e043bea1c710a5ce21da1af2f69f48cc7f408d94c02ad317ef742420e6047668"
	I1124 04:09:23.190829  451987 cri.go:89] found id: "848b7a5b1960bf771106d3ade5f36482eb5247f3fdffae808cd1b74ec8b48cb5"
	I1124 04:09:23.190851  451987 cri.go:89] found id: "490bbd4ce436cb05c5881f746e8020778291193555fa5f32a43ae3598eddbd0d"
	I1124 04:09:23.190884  451987 cri.go:89] found id: "c8a48af8745d6e7ed60d98860bbdd720ffd910fb5f4ca540179f0c1ded57c194"
	I1124 04:09:23.190911  451987 cri.go:89] found id: "11beefbf1a8a00864fe3381844d37aedc1f9ce9831b394bb7bdc1ded4239d89e"
	I1124 04:09:23.190929  451987 cri.go:89] found id: "c7f699f1cc0ef484fe9224d86d2fb5cdd924b5f1e89ed3e12a85d92c72cc378e"
	I1124 04:09:23.190950  451987 cri.go:89] found id: "d013da0147b8934c86c12720efd80ec59f61d17784274267bb11bc96acac4c94"
	I1124 04:09:23.190984  451987 cri.go:89] found id: "5b712eafb8e889f1f03dcbca8e8cff4e20805686a79699de0f7ac21affd0b9f9"
	I1124 04:09:23.191008  451987 cri.go:89] found id: "94b696efd6b4382c567f3d5ef6f1fd9532fd20a48c4c2f66e053c1586cb5b17e"
	I1124 04:09:23.191026  451987 cri.go:89] found id: ""
	I1124 04:09:23.191110  451987 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 04:09:23.211676  451987 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:09:23Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:09:23.211754  451987 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:09:23.221439  451987 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 04:09:23.221509  451987 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 04:09:23.221602  451987 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 04:09:23.236200  451987 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 04:09:23.236878  451987 kubeconfig.go:125] found "pause-396108" server: "https://192.168.85.2:8443"
	I1124 04:09:23.237736  451987 kapi.go:59] client config for pause-396108: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/client.crt", KeyFile:"/home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/client.key", CAFile:"/home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 04:09:23.238297  451987 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 04:09:23.238545  451987 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 04:09:23.238571  451987 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 04:09:23.238617  451987 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1124 04:09:23.238645  451987 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 04:09:23.238962  451987 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 04:09:23.251749  451987 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 04:09:23.251831  451987 kubeadm.go:602] duration metric: took 30.301581ms to restartPrimaryControlPlane
	I1124 04:09:23.251856  451987 kubeadm.go:403] duration metric: took 120.528069ms to StartCluster
	I1124 04:09:23.251898  451987 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:09:23.251985  451987 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:09:23.252821  451987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:09:23.253095  451987 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:09:23.253460  451987 config.go:182] Loaded profile config "pause-396108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:09:23.253613  451987 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:09:23.257009  451987 out.go:179] * Verifying Kubernetes components...
	I1124 04:09:23.257106  451987 out.go:179] * Enabled addons: 
	I1124 04:09:20.558358  437115 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.072222249s)
	W1124 04:09:20.558401  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 04:09:20.558410  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:20.558422  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:20.600811  437115 logs.go:123] Gathering logs for kube-scheduler [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87] ...
	I1124 04:09:20.600887  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:20.697809  437115 logs.go:123] Gathering logs for kube-controller-manager [01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c] ...
	I1124 04:09:20.697854  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c"
	W1124 04:09:20.745018  437115 logs.go:130] failed kube-controller-manager [01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c": Process exited with status 1
	stdout:
	
	stderr:
	E1124 04:09:20.742195    4120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c\": container with ID starting with 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c not found: ID does not exist" containerID="01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c"
	time="2025-11-24T04:09:20Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c\": container with ID starting with 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c not found: ID does not exist"
	 output: 
	** stderr ** 
	E1124 04:09:20.742195    4120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c\": container with ID starting with 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c not found: ID does not exist" containerID="01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c"
	time="2025-11-24T04:09:20Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c\": container with ID starting with 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c not found: ID does not exist"
	
	** /stderr **
	I1124 04:09:20.745044  437115 logs.go:123] Gathering logs for CRI-O ...
	I1124 04:09:20.745063  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 04:09:20.822834  437115 logs.go:123] Gathering logs for kube-apiserver [c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9] ...
	I1124 04:09:20.822889  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9"
	I1124 04:09:20.862884  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:20.862924  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:20.915684  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:20.915714  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:21.053711  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:21.053747  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 04:09:23.575091  437115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:09:24.911573  437115 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:49602->192.168.76.2:8443: read: connection reset by peer
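
The healthz loop above is a plain HTTPS GET against the apiserver endpoint. A stripped-down Go sketch of that probe; TLS verification is disabled here purely for brevity, whereas the real client authenticates with the profile's client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. connection reset by peer, as in the log
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d\n%s\n", resp.StatusCode, body)
}
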
	I1124 04:09:24.911628  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 04:09:24.911688  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 04:09:24.968564  437115 cri.go:89] found id: "171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:24.968584  437115 cri.go:89] found id: "c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9"
	I1124 04:09:24.968589  437115 cri.go:89] found id: ""
	I1124 04:09:24.968597  437115 logs.go:282] 2 containers: [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1 c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9]
	I1124 04:09:24.968653  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:24.974792  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:24.982271  437115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 04:09:24.982342  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 04:09:25.040923  437115 cri.go:89] found id: ""
	I1124 04:09:25.040946  437115 logs.go:282] 0 containers: []
	W1124 04:09:25.040954  437115 logs.go:284] No container was found matching "etcd"
	I1124 04:09:25.040960  437115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 04:09:25.041019  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 04:09:25.109589  437115 cri.go:89] found id: ""
	I1124 04:09:25.109623  437115 logs.go:282] 0 containers: []
	W1124 04:09:25.109632  437115 logs.go:284] No container was found matching "coredns"
	I1124 04:09:25.109638  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 04:09:25.109699  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 04:09:25.154610  437115 cri.go:89] found id: "25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:25.154692  437115 cri.go:89] found id: ""
	I1124 04:09:25.154717  437115 logs.go:282] 1 containers: [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87]
	I1124 04:09:25.154812  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:25.163612  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 04:09:25.163684  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 04:09:25.201515  437115 cri.go:89] found id: ""
	I1124 04:09:25.201543  437115 logs.go:282] 0 containers: []
	W1124 04:09:25.201553  437115 logs.go:284] No container was found matching "kube-proxy"
	I1124 04:09:25.201559  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 04:09:25.201630  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 04:09:25.248296  437115 cri.go:89] found id: "c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:25.248316  437115 cri.go:89] found id: ""
	I1124 04:09:25.248323  437115 logs.go:282] 1 containers: [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32]
	I1124 04:09:25.248379  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:25.252625  437115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 04:09:25.252697  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 04:09:25.308993  437115 cri.go:89] found id: ""
	I1124 04:09:25.309017  437115 logs.go:282] 0 containers: []
	W1124 04:09:25.309028  437115 logs.go:284] No container was found matching "kindnet"
	I1124 04:09:25.309034  437115 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 04:09:25.309098  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 04:09:25.364253  437115 cri.go:89] found id: ""
	I1124 04:09:25.364331  437115 logs.go:282] 0 containers: []
	W1124 04:09:25.364343  437115 logs.go:284] No container was found matching "storage-provisioner"
	I1124 04:09:25.364358  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:25.364371  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 04:09:25.388044  437115 logs.go:123] Gathering logs for describe nodes ...
	I1124 04:09:25.388129  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 04:09:25.506329  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
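The "connection to the server localhost:8443 was refused" failure above is the on-node kubectl being unable to reach the apiserver: /var/lib/minikube/kubeconfig points at localhost:8443 on the node, and at this point in the log no kube-apiserver is listening there. A hedged way to rerun the same probe by hand (paths and binary version taken from the log; `minikube ssh` into the profile first; `get nodes` swapped in for `describe nodes` as a lighter check):

	# sketch only: same kubeconfig and kubectl binary that logs.go uses above
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get nodes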
	I1124 04:09:25.506349  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:25.506365  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:23.259960  451987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:09:23.260086  451987 addons.go:530] duration metric: took 6.475851ms for enable addons: enabled=[]
	I1124 04:09:23.523972  451987 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:09:23.540488  451987 node_ready.go:35] waiting up to 6m0s for node "pause-396108" to be "Ready" ...
	I1124 04:09:26.564421  451987 node_ready.go:49] node "pause-396108" is "Ready"
	I1124 04:09:26.564448  451987 node_ready.go:38] duration metric: took 3.023906913s for node "pause-396108" to be "Ready" ...
	I1124 04:09:26.564461  451987 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:09:26.564521  451987 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:09:26.582528  451987 api_server.go:72] duration metric: took 3.329374699s to wait for apiserver process to appear ...
	I1124 04:09:26.582553  451987 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:09:26.582572  451987 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 04:09:26.646180  451987 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 04:09:26.646266  451987 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
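Each `[+]`/`[-]` line in the 500 body above is one named apiserver health check; the `[-]` entries (rbac bootstrap-roles, bootstrap-controller, the service-CIDR and APIService controllers) are post-start hooks that have not finished after the restart, which is why /healthz as a whole returns 500. The same verbose report can be pulled by hand; under the default `system:public-info-viewer` binding /healthz is readable even unauthenticated, so a minimal sketch (IP and port from the log) is:

	# sketch only: -k skips TLS verification; ?verbose itemizes each check
	curl -k "https://192.168.85.2:8443/healthz?verbose"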
	I1124 04:09:27.082701  451987 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 04:09:27.095526  451987 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 04:09:27.095609  451987 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 04:09:27.583268  451987 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 04:09:27.591393  451987 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 04:09:27.592474  451987 api_server.go:141] control plane version: v1.34.1
	I1124 04:09:27.592499  451987 api_server.go:131] duration metric: took 1.009938777s to wait for apiserver health ...
	I1124 04:09:27.592507  451987 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:09:27.595826  451987 system_pods.go:59] 7 kube-system pods found
	I1124 04:09:27.595866  451987 system_pods.go:61] "coredns-66bc5c9577-xfr6t" [fd71cd99-c8ae-4289-91db-9e0d7fe80820] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:09:27.595876  451987 system_pods.go:61] "etcd-pause-396108" [8cf88e79-6c75-41a5-8054-64ab30eed960] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:09:27.595881  451987 system_pods.go:61] "kindnet-mfqdh" [bb8e0be7-54b6-4486-9171-829f7caa1732] Running
	I1124 04:09:27.595889  451987 system_pods.go:61] "kube-apiserver-pause-396108" [cfca4498-6686-4773-8518-85bed07245bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:09:27.595900  451987 system_pods.go:61] "kube-controller-manager-pause-396108" [929bd265-8894-4dea-aada-003d5f8bb490] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:09:27.595909  451987 system_pods.go:61] "kube-proxy-scjq4" [55cb7252-6c8b-4499-8353-ffca1a4f06d1] Running
	I1124 04:09:27.595915  451987 system_pods.go:61] "kube-scheduler-pause-396108" [e5874182-3ff1-46f0-9d2f-38553f584bd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:09:27.595922  451987 system_pods.go:74] duration metric: took 3.407709ms to wait for pod list to return data ...
	I1124 04:09:27.595931  451987 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:09:27.600102  451987 default_sa.go:45] found service account: "default"
	I1124 04:09:27.600131  451987 default_sa.go:55] duration metric: took 4.190835ms for default service account to be created ...
	I1124 04:09:27.600142  451987 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 04:09:27.603016  451987 system_pods.go:86] 7 kube-system pods found
	I1124 04:09:27.603048  451987 system_pods.go:89] "coredns-66bc5c9577-xfr6t" [fd71cd99-c8ae-4289-91db-9e0d7fe80820] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:09:27.603061  451987 system_pods.go:89] "etcd-pause-396108" [8cf88e79-6c75-41a5-8054-64ab30eed960] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:09:27.603067  451987 system_pods.go:89] "kindnet-mfqdh" [bb8e0be7-54b6-4486-9171-829f7caa1732] Running
	I1124 04:09:27.603073  451987 system_pods.go:89] "kube-apiserver-pause-396108" [cfca4498-6686-4773-8518-85bed07245bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:09:27.603080  451987 system_pods.go:89] "kube-controller-manager-pause-396108" [929bd265-8894-4dea-aada-003d5f8bb490] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:09:27.603085  451987 system_pods.go:89] "kube-proxy-scjq4" [55cb7252-6c8b-4499-8353-ffca1a4f06d1] Running
	I1124 04:09:27.603099  451987 system_pods.go:89] "kube-scheduler-pause-396108" [e5874182-3ff1-46f0-9d2f-38553f584bd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:09:27.603114  451987 system_pods.go:126] duration metric: took 2.964813ms to wait for k8s-apps to be running ...
	I1124 04:09:27.603122  451987 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 04:09:27.603181  451987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:09:27.616923  451987 system_svc.go:56] duration metric: took 13.791257ms WaitForService to wait for kubelet
	I1124 04:09:27.616993  451987 kubeadm.go:587] duration metric: took 4.363843594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:09:27.617026  451987 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:09:27.619863  451987 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:09:27.619904  451987 node_conditions.go:123] node cpu capacity is 2
	I1124 04:09:27.619917  451987 node_conditions.go:105] duration metric: took 2.873523ms to run NodePressure ...
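The node_conditions check above verifies that the node reports no resource-pressure conditions and records its capacity (2 CPUs, 203034800Ki ephemeral storage). Roughly the same view by hand, as a sketch against this profile's kube context:

	# sketch only: print each node condition as type=status
	kubectl get node pause-396108 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'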
	I1124 04:09:27.619929  451987 start.go:242] waiting for startup goroutines ...
	I1124 04:09:27.619937  451987 start.go:247] waiting for cluster config update ...
	I1124 04:09:27.619945  451987 start.go:256] writing updated cluster config ...
	I1124 04:09:27.620265  451987 ssh_runner.go:195] Run: rm -f paused
	I1124 04:09:27.623675  451987 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:09:27.624315  451987 kapi.go:59] client config for pause-396108: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/client.crt", KeyFile:"/home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/client.key", CAFile:"/home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 04:09:27.627288  451987 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xfr6t" in "kube-system" namespace to be "Ready" or be gone ...
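The pod_ready loop starting here polls each labelled kube-system pod until its Ready condition turns true (or the 4m0s budget runs out). Plain kubectl can express approximately the same wait; a sketch for the CoreDNS label shown in the log:

	# sketch only: block until kube-dns pods report Ready, up to 4 minutes
	kubectl -n kube-system wait pod -l k8s-app=kube-dns \
	  --for=condition=Ready --timeout=4m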
	I1124 04:09:25.564193  437115 logs.go:123] Gathering logs for kube-apiserver [c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9] ...
	I1124 04:09:25.564269  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9"
	W1124 04:09:25.613626  437115 logs.go:130] failed kube-apiserver [c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9": Process exited with status 1
	stdout:
	
	stderr:
	E1124 04:09:25.604526    4237 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9\": container with ID starting with c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9 not found: ID does not exist" containerID="c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9"
	time="2025-11-24T04:09:25Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9\": container with ID starting with c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1124 04:09:25.604526    4237 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9\": container with ID starting with c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9 not found: ID does not exist" containerID="c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9"
	time="2025-11-24T04:09:25Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9\": container with ID starting with c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9 not found: ID does not exist"
	
	** /stderr **
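The NotFound failure above is a benign race: the container ID was collected by `crictl ps -a --quiet` at 04:09:24, but CRI-O removed that exited kube-apiserver container before `crictl logs` ran at 04:09:25. A hedged by-hand equivalent that re-resolves the ID immediately before reading closes most of the window:

	# sketch only: re-resolve the newest kube-apiserver container, then read its logs
	id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	[ -n "$id" ] && sudo crictl logs --tail 400 "$id"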
	I1124 04:09:25.613646  437115 logs.go:123] Gathering logs for kube-scheduler [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87] ...
	I1124 04:09:25.613658  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:25.733745  437115 logs.go:123] Gathering logs for kube-controller-manager [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32] ...
	I1124 04:09:25.733827  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:25.779622  437115 logs.go:123] Gathering logs for CRI-O ...
	I1124 04:09:25.779658  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 04:09:25.883437  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:25.883530  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:25.938708  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:25.938737  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:28.595176  437115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:09:28.595637  437115 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 04:09:28.595702  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 04:09:28.595779  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 04:09:28.626849  437115 cri.go:89] found id: "171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:28.626877  437115 cri.go:89] found id: ""
	I1124 04:09:28.626892  437115 logs.go:282] 1 containers: [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1]
	I1124 04:09:28.626949  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:28.634029  437115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 04:09:28.634100  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 04:09:28.663129  437115 cri.go:89] found id: ""
	I1124 04:09:28.663157  437115 logs.go:282] 0 containers: []
	W1124 04:09:28.663166  437115 logs.go:284] No container was found matching "etcd"
	I1124 04:09:28.663172  437115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 04:09:28.663232  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 04:09:28.688278  437115 cri.go:89] found id: ""
	I1124 04:09:28.688306  437115 logs.go:282] 0 containers: []
	W1124 04:09:28.688316  437115 logs.go:284] No container was found matching "coredns"
	I1124 04:09:28.688323  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 04:09:28.688383  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 04:09:28.721734  437115 cri.go:89] found id: "25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:28.721758  437115 cri.go:89] found id: ""
	I1124 04:09:28.721767  437115 logs.go:282] 1 containers: [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87]
	I1124 04:09:28.721833  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:28.725914  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 04:09:28.725987  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 04:09:28.757735  437115 cri.go:89] found id: ""
	I1124 04:09:28.757756  437115 logs.go:282] 0 containers: []
	W1124 04:09:28.757764  437115 logs.go:284] No container was found matching "kube-proxy"
	I1124 04:09:28.757769  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 04:09:28.757852  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 04:09:28.783062  437115 cri.go:89] found id: "c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:28.783084  437115 cri.go:89] found id: ""
	I1124 04:09:28.783093  437115 logs.go:282] 1 containers: [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32]
	I1124 04:09:28.783148  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:28.786856  437115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 04:09:28.786971  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 04:09:28.818146  437115 cri.go:89] found id: ""
	I1124 04:09:28.818172  437115 logs.go:282] 0 containers: []
	W1124 04:09:28.818181  437115 logs.go:284] No container was found matching "kindnet"
	I1124 04:09:28.818187  437115 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 04:09:28.818247  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 04:09:28.844007  437115 cri.go:89] found id: ""
	I1124 04:09:28.844032  437115 logs.go:282] 0 containers: []
	W1124 04:09:28.844041  437115 logs.go:284] No container was found matching "storage-provisioner"
	I1124 04:09:28.844050  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:28.844081  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:28.885287  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:28.885322  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:29.022094  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:29.022138  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 04:09:29.053406  437115 logs.go:123] Gathering logs for describe nodes ...
	I1124 04:09:29.053438  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 04:09:29.150799  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 04:09:29.150868  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:29.150897  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:29.198785  437115 logs.go:123] Gathering logs for kube-scheduler [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87] ...
	I1124 04:09:29.198817  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:29.263761  437115 logs.go:123] Gathering logs for kube-controller-manager [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32] ...
	I1124 04:09:29.263800  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:29.291266  437115 logs.go:123] Gathering logs for CRI-O ...
	I1124 04:09:29.291293  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1124 04:09:29.634491  451987 pod_ready.go:104] pod "coredns-66bc5c9577-xfr6t" is not "Ready", error: <nil>
	W1124 04:09:32.134245  451987 pod_ready.go:104] pod "coredns-66bc5c9577-xfr6t" is not "Ready", error: <nil>
	I1124 04:09:31.859503  437115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:09:31.859955  437115 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 04:09:31.860003  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 04:09:31.860064  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 04:09:31.886133  437115 cri.go:89] found id: "171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:31.886153  437115 cri.go:89] found id: ""
	I1124 04:09:31.886161  437115 logs.go:282] 1 containers: [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1]
	I1124 04:09:31.886220  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:31.889849  437115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 04:09:31.889920  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 04:09:31.920033  437115 cri.go:89] found id: ""
	I1124 04:09:31.920064  437115 logs.go:282] 0 containers: []
	W1124 04:09:31.920077  437115 logs.go:284] No container was found matching "etcd"
	I1124 04:09:31.920083  437115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 04:09:31.920144  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 04:09:31.946194  437115 cri.go:89] found id: ""
	I1124 04:09:31.946217  437115 logs.go:282] 0 containers: []
	W1124 04:09:31.946225  437115 logs.go:284] No container was found matching "coredns"
	I1124 04:09:31.946231  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 04:09:31.946288  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 04:09:31.976121  437115 cri.go:89] found id: "25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:31.976191  437115 cri.go:89] found id: ""
	I1124 04:09:31.976215  437115 logs.go:282] 1 containers: [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87]
	I1124 04:09:31.976291  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:31.980158  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 04:09:31.980246  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 04:09:32.009856  437115 cri.go:89] found id: ""
	I1124 04:09:32.009881  437115 logs.go:282] 0 containers: []
	W1124 04:09:32.009890  437115 logs.go:284] No container was found matching "kube-proxy"
	I1124 04:09:32.009896  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 04:09:32.009986  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 04:09:32.038205  437115 cri.go:89] found id: "c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:32.038230  437115 cri.go:89] found id: ""
	I1124 04:09:32.038240  437115 logs.go:282] 1 containers: [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32]
	I1124 04:09:32.038303  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:32.042322  437115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 04:09:32.042497  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 04:09:32.069834  437115 cri.go:89] found id: ""
	I1124 04:09:32.069863  437115 logs.go:282] 0 containers: []
	W1124 04:09:32.069873  437115 logs.go:284] No container was found matching "kindnet"
	I1124 04:09:32.069879  437115 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 04:09:32.069944  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 04:09:32.097573  437115 cri.go:89] found id: ""
	I1124 04:09:32.097601  437115 logs.go:282] 0 containers: []
	W1124 04:09:32.097611  437115 logs.go:284] No container was found matching "storage-provisioner"
	I1124 04:09:32.097621  437115 logs.go:123] Gathering logs for CRI-O ...
	I1124 04:09:32.097638  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 04:09:32.159073  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:32.159108  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:32.192530  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:32.192560  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:32.309268  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:32.309310  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 04:09:32.325747  437115 logs.go:123] Gathering logs for describe nodes ...
	I1124 04:09:32.325778  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 04:09:32.396957  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 04:09:32.397020  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:32.397041  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:32.434948  437115 logs.go:123] Gathering logs for kube-scheduler [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87] ...
	I1124 04:09:32.434982  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:32.501225  437115 logs.go:123] Gathering logs for kube-controller-manager [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32] ...
	I1124 04:09:32.501267  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:35.030519  437115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:09:35.031012  437115 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 04:09:35.031061  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 04:09:35.031119  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 04:09:35.059986  437115 cri.go:89] found id: "171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:35.060009  437115 cri.go:89] found id: ""
	I1124 04:09:35.060018  437115 logs.go:282] 1 containers: [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1]
	I1124 04:09:35.060079  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:35.064189  437115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 04:09:35.064267  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 04:09:35.091210  437115 cri.go:89] found id: ""
	I1124 04:09:35.091237  437115 logs.go:282] 0 containers: []
	W1124 04:09:35.091254  437115 logs.go:284] No container was found matching "etcd"
	I1124 04:09:35.091260  437115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 04:09:35.091321  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 04:09:35.123976  437115 cri.go:89] found id: ""
	I1124 04:09:35.123999  437115 logs.go:282] 0 containers: []
	W1124 04:09:35.124007  437115 logs.go:284] No container was found matching "coredns"
	I1124 04:09:35.124013  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 04:09:35.124071  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 04:09:35.159072  437115 cri.go:89] found id: "25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:35.159093  437115 cri.go:89] found id: ""
	I1124 04:09:35.159101  437115 logs.go:282] 1 containers: [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87]
	I1124 04:09:35.159157  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:35.163073  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 04:09:35.163151  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 04:09:35.189626  437115 cri.go:89] found id: ""
	I1124 04:09:35.189657  437115 logs.go:282] 0 containers: []
	W1124 04:09:35.189667  437115 logs.go:284] No container was found matching "kube-proxy"
	I1124 04:09:35.189673  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 04:09:35.189734  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 04:09:35.215678  437115 cri.go:89] found id: "c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:35.215705  437115 cri.go:89] found id: ""
	I1124 04:09:35.215715  437115 logs.go:282] 1 containers: [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32]
	I1124 04:09:35.215775  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:35.219722  437115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 04:09:35.219828  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 04:09:35.253188  437115 cri.go:89] found id: ""
	I1124 04:09:35.253215  437115 logs.go:282] 0 containers: []
	W1124 04:09:35.253224  437115 logs.go:284] No container was found matching "kindnet"
	I1124 04:09:35.253231  437115 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 04:09:35.253294  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 04:09:35.279170  437115 cri.go:89] found id: ""
	I1124 04:09:35.279196  437115 logs.go:282] 0 containers: []
	W1124 04:09:35.279206  437115 logs.go:284] No container was found matching "storage-provisioner"
	I1124 04:09:35.279216  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:35.279228  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 04:09:35.295680  437115 logs.go:123] Gathering logs for describe nodes ...
	I1124 04:09:35.295712  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 04:09:35.366375  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 04:09:35.366399  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:35.366417  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:35.398651  437115 logs.go:123] Gathering logs for kube-scheduler [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87] ...
	I1124 04:09:35.398683  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:35.464542  437115 logs.go:123] Gathering logs for kube-controller-manager [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32] ...
	I1124 04:09:35.464579  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:35.494347  437115 logs.go:123] Gathering logs for CRI-O ...
	I1124 04:09:35.494376  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 04:09:34.133423  451987 pod_ready.go:94] pod "coredns-66bc5c9577-xfr6t" is "Ready"
	I1124 04:09:34.133511  451987 pod_ready.go:86] duration metric: took 6.506197483s for pod "coredns-66bc5c9577-xfr6t" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:34.136671  451987 pod_ready.go:83] waiting for pod "etcd-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:36.142219  451987 pod_ready.go:94] pod "etcd-pause-396108" is "Ready"
	I1124 04:09:36.142247  451987 pod_ready.go:86] duration metric: took 2.005549635s for pod "etcd-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:36.144567  451987 pod_ready.go:83] waiting for pod "kube-apiserver-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:36.149525  451987 pod_ready.go:94] pod "kube-apiserver-pause-396108" is "Ready"
	I1124 04:09:36.149555  451987 pod_ready.go:86] duration metric: took 4.958896ms for pod "kube-apiserver-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:36.151929  451987 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:35.557314  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:35.557348  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:35.591325  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:35.591356  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:38.212664  437115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:09:38.213101  437115 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 04:09:38.213164  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 04:09:38.213240  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 04:09:38.239721  437115 cri.go:89] found id: "171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:38.239743  437115 cri.go:89] found id: ""
	I1124 04:09:38.239752  437115 logs.go:282] 1 containers: [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1]
	I1124 04:09:38.239812  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:38.243591  437115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 04:09:38.243673  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 04:09:38.270346  437115 cri.go:89] found id: ""
	I1124 04:09:38.270370  437115 logs.go:282] 0 containers: []
	W1124 04:09:38.270379  437115 logs.go:284] No container was found matching "etcd"
	I1124 04:09:38.270386  437115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 04:09:38.270444  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 04:09:38.296322  437115 cri.go:89] found id: ""
	I1124 04:09:38.296346  437115 logs.go:282] 0 containers: []
	W1124 04:09:38.296355  437115 logs.go:284] No container was found matching "coredns"
	I1124 04:09:38.296361  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 04:09:38.296422  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 04:09:38.322556  437115 cri.go:89] found id: "25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:38.322586  437115 cri.go:89] found id: ""
	I1124 04:09:38.322596  437115 logs.go:282] 1 containers: [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87]
	I1124 04:09:38.322651  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:38.326350  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 04:09:38.326427  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 04:09:38.354850  437115 cri.go:89] found id: ""
	I1124 04:09:38.354874  437115 logs.go:282] 0 containers: []
	W1124 04:09:38.354884  437115 logs.go:284] No container was found matching "kube-proxy"
	I1124 04:09:38.354891  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 04:09:38.354951  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 04:09:38.382626  437115 cri.go:89] found id: "c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:38.382649  437115 cri.go:89] found id: ""
	I1124 04:09:38.382658  437115 logs.go:282] 1 containers: [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32]
	I1124 04:09:38.382714  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:38.386591  437115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 04:09:38.386727  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 04:09:38.413321  437115 cri.go:89] found id: ""
	I1124 04:09:38.413349  437115 logs.go:282] 0 containers: []
	W1124 04:09:38.413358  437115 logs.go:284] No container was found matching "kindnet"
	I1124 04:09:38.413370  437115 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 04:09:38.413434  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 04:09:38.447096  437115 cri.go:89] found id: ""
	I1124 04:09:38.447118  437115 logs.go:282] 0 containers: []
	W1124 04:09:38.447127  437115 logs.go:284] No container was found matching "storage-provisioner"
	I1124 04:09:38.447135  437115 logs.go:123] Gathering logs for describe nodes ...
	I1124 04:09:38.447147  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 04:09:38.517008  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 04:09:38.517031  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:38.517046  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:38.556318  437115 logs.go:123] Gathering logs for kube-scheduler [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87] ...
	I1124 04:09:38.556350  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:38.616511  437115 logs.go:123] Gathering logs for kube-controller-manager [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32] ...
	I1124 04:09:38.616548  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:38.644755  437115 logs.go:123] Gathering logs for CRI-O ...
	I1124 04:09:38.644835  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 04:09:38.709999  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:38.710039  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:38.741969  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:38.741998  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:38.869515  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:38.869563  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1124 04:09:38.157420  451987 pod_ready.go:104] pod "kube-controller-manager-pause-396108" is not "Ready", error: <nil>
	W1124 04:09:40.158124  451987 pod_ready.go:104] pod "kube-controller-manager-pause-396108" is not "Ready", error: <nil>
	I1124 04:09:41.657944  451987 pod_ready.go:94] pod "kube-controller-manager-pause-396108" is "Ready"
	I1124 04:09:41.657968  451987 pod_ready.go:86] duration metric: took 5.506012123s for pod "kube-controller-manager-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:41.660568  451987 pod_ready.go:83] waiting for pod "kube-proxy-scjq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:41.665187  451987 pod_ready.go:94] pod "kube-proxy-scjq4" is "Ready"
	I1124 04:09:41.665253  451987 pod_ready.go:86] duration metric: took 4.662768ms for pod "kube-proxy-scjq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:41.677066  451987 pod_ready.go:83] waiting for pod "kube-scheduler-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:41.693528  451987 pod_ready.go:94] pod "kube-scheduler-pause-396108" is "Ready"
	I1124 04:09:41.693552  451987 pod_ready.go:86] duration metric: took 16.463912ms for pod "kube-scheduler-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:41.693565  451987 pod_ready.go:40] duration metric: took 14.069859482s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:09:41.793418  451987 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 04:09:41.796352  451987 out.go:179] * Done! kubectl is now configured to use "pause-396108" cluster and "default" namespace by default
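start.go:625 above notes a client/server minor-version skew of 1 (kubectl 1.33.2 against a 1.34.1 cluster); kubectl supports a skew of one minor version in either direction, so this is informational rather than an error. Checking the skew by hand is just:

	# sketch only: prints both Client Version and Server Version
	kubectl version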
	
	
	==> CRI-O <==
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.025833799Z" level=info msg="Started container" PID=2217 containerID=5774c046abe9043aadbfd6b9c4831cb9ed1b5f813e056e0057a80407ea6f6d2f description=kube-system/coredns-66bc5c9577-xfr6t/coredns id=ccb5055d-f8fd-481a-9883-bd0890d41282 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4840c1c8edc3b4d01ae73cc6e1cf4fc0e1670d5b6a16d2e31fbbbaa140221352
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.029977126Z" level=info msg="Created container e043bea1c710a5ce21da1af2f69f48cc7f408d94c02ad317ef742420e6047668: kube-system/kube-scheduler-pause-396108/kube-scheduler" id=e3ac6885-03ab-44c9-852b-3c24c77a3801 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.030596877Z" level=info msg="Starting container: e043bea1c710a5ce21da1af2f69f48cc7f408d94c02ad317ef742420e6047668" id=d506ed95-3217-4264-b49b-dfec18cb4619 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.036393835Z" level=info msg="Created container 1b824daf1dec3f99a1d5c16e27c0cfc32418b57d8ec747134d3686b45610c51f: kube-system/etcd-pause-396108/etcd" id=a4f2fb11-190f-4ad3-a2f0-d83a1435350a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.037613965Z" level=info msg="Starting container: 1b824daf1dec3f99a1d5c16e27c0cfc32418b57d8ec747134d3686b45610c51f" id=49bc72bc-5d01-4367-9a8f-85c843fb34bf name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.049224658Z" level=info msg="Started container" PID=2222 containerID=1b824daf1dec3f99a1d5c16e27c0cfc32418b57d8ec747134d3686b45610c51f description=kube-system/etcd-pause-396108/etcd id=49bc72bc-5d01-4367-9a8f-85c843fb34bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=c53edc38cd4aaed5adc34ba312db7610711e2b097c869bb7876b5d8602eb0493
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.049788729Z" level=info msg="Started container" PID=2219 containerID=e043bea1c710a5ce21da1af2f69f48cc7f408d94c02ad317ef742420e6047668 description=kube-system/kube-scheduler-pause-396108/kube-scheduler id=d506ed95-3217-4264-b49b-dfec18cb4619 name=/runtime.v1.RuntimeService/StartContainer sandboxID=26b0fa11607255a3e5608e30feda103f23f01293cb1c4a1084043152795c9a66
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.090143613Z" level=info msg="Created container ec93103e13f8d61b3bbca237e6337d82343ed31e17f5875e3885f8f49c8ba154: kube-system/kube-controller-manager-pause-396108/kube-controller-manager" id=1327f501-2537-4ed0-91e6-704b59f99435 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.091462658Z" level=info msg="Starting container: ec93103e13f8d61b3bbca237e6337d82343ed31e17f5875e3885f8f49c8ba154" id=1dcadf34-3b05-4b92-99de-59245effb443 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.092672851Z" level=info msg="Created container 510de710da2ba8ea819f9bd7d6008fc188c474758dc0982bc622288bdaf88a09: kube-system/kindnet-mfqdh/kindnet-cni" id=4ffca681-ae0d-46e2-b80b-4a4fe2345794 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.093403332Z" level=info msg="Starting container: 510de710da2ba8ea819f9bd7d6008fc188c474758dc0982bc622288bdaf88a09" id=5928429e-6d96-42b5-85ae-483912530f11 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.094667492Z" level=info msg="Started container" PID=2260 containerID=ec93103e13f8d61b3bbca237e6337d82343ed31e17f5875e3885f8f49c8ba154 description=kube-system/kube-controller-manager-pause-396108/kube-controller-manager id=1dcadf34-3b05-4b92-99de-59245effb443 name=/runtime.v1.RuntimeService/StartContainer sandboxID=69b0953b6c60580f2d938e26502ad893c931d1f9d06d9c058be5b1d5502cc9b7
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.102719024Z" level=info msg="Started container" PID=2250 containerID=510de710da2ba8ea819f9bd7d6008fc188c474758dc0982bc622288bdaf88a09 description=kube-system/kindnet-mfqdh/kindnet-cni id=5928429e-6d96-42b5-85ae-483912530f11 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc12e43147e2d03874b3d464ffb4bb201a715b3e38cc7dd82a51041f4db807fd
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.471046385Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.482341339Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.482595021Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.482682555Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.488679432Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.488849511Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.489028894Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.498885738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.499061716Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.499146796Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.504757157Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.504924159Z" level=info msg="Updated default CNI network name to kindnet"
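	
	The CREATE → WRITE → RENAME sequence above is kindnet atomically replacing its CNI config: it writes 10-kindnet.conflist.temp and renames it over 10-kindnet.conflist, and each event makes crio re-read the file (the "Found CNI network kindnet" lines). To inspect the resulting config on the node, a minimal sketch, assuming shell access via minikube ssh (the profile name pause-396108 and the conflist path are taken from the log above):
	
	  # Print the active kindnet CNI config from inside the node
	  minikube -p pause-396108 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist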
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	510de710da2ba       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   dc12e43147e2d       kindnet-mfqdh                          kube-system
	ec93103e13f8d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago       Running             kube-controller-manager   1                   69b0953b6c605       kube-controller-manager-pause-396108   kube-system
	1b824daf1dec3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago       Running             etcd                      1                   c53edc38cd4aa       etcd-pause-396108                      kube-system
	9d6051a594d15       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago       Running             kube-apiserver            1                   3888b10d33971       kube-apiserver-pause-396108            kube-system
	5774c046abe90       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   4840c1c8edc3b       coredns-66bc5c9577-xfr6t               kube-system
	e043bea1c710a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   26b0fa1160725       kube-scheduler-pause-396108            kube-system
	848b7a5b1960b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   9b2c9f5fdaf22       kube-proxy-scjq4                       kube-system
	490bbd4ce436c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   4840c1c8edc3b       coredns-66bc5c9577-xfr6t               kube-system
	c8a48af8745d6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   dc12e43147e2d       kindnet-mfqdh                          kube-system
	11beefbf1a8a0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   9b2c9f5fdaf22       kube-proxy-scjq4                       kube-system
	c7f699f1cc0ef       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   26b0fa1160725       kube-scheduler-pause-396108            kube-system
	d013da0147b89       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   3888b10d33971       kube-apiserver-pause-396108            kube-system
	5b712eafb8e88       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   69b0953b6c605       kube-controller-manager-pause-396108   kube-system
	94b696efd6b43       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   c53edc38cd4aa       etcd-pause-396108                      kube-system
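	
	Each restarted container (ATTEMPT 1, Running) shares its POD ID and image with the Exited ATTEMPT 0 entry below it, i.e. every control-plane and addon container came back in its original sandbox after the pause/unpause cycle. The same table can be reproduced from the node; a sketch, assuming crictl is on the node's PATH as in minikube's CRI-O images (-a includes the exited attempt-0 containers):
	
	  minikube -p pause-396108 ssh -- sudo crictl ps -a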
	
	
	==> coredns [490bbd4ce436cb05c5881f746e8020778291193555fa5f32a43ae3598eddbd0d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60143 - 40329 "HINFO IN 6168204137401324256.8467796082807871234. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023768706s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5774c046abe9043aadbfd6b9c4831cb9ed1b5f813e056e0057a80407ea6f6d2f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60637 - 31050 "HINFO IN 8734035371615620278.953058895943700928. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013299819s
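	
	The first coredns instance above shut down cleanly on SIGTERM (lameduck mode for 5s), and its replacement spent its first seconds retrying 10.96.0.1:443 with "connection refused" until the restarted kube-apiserver came back, after which it served normally. To check that CoreDNS settled, a sketch assuming the default minikube kubeconfig context named after the profile and the conventional kubeadm/minikube k8s-app=kube-dns selector:
	
	  kubectl --context pause-396108 -n kube-system get pods -l k8s-app=kube-dns -o wide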
	
	
	==> describe nodes <==
	Name:               pause-396108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-396108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=pause-396108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_08_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:08:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-396108
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:09:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:09:09 +0000   Mon, 24 Nov 2025 04:08:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:09:09 +0000   Mon, 24 Nov 2025 04:08:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:09:09 +0000   Mon, 24 Nov 2025 04:08:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:09:09 +0000   Mon, 24 Nov 2025 04:09:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-396108
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                5e1abedc-8af0-4c26-815b-98375e2397ff
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-xfr6t                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-396108                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kindnet-mfqdh                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-396108             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-396108    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-scjq4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-396108             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 75s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Warning  CgroupV1                 90s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  90s (x8 over 90s)  kubelet          Node pause-396108 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    90s (x8 over 90s)  kubelet          Node pause-396108 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     90s (x8 over 90s)  kubelet          Node pause-396108 status is now: NodeHasSufficientPID
	  Normal   Starting                 82s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 82s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  82s                kubelet          Node pause-396108 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s                kubelet          Node pause-396108 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     82s                kubelet          Node pause-396108 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           78s                node-controller  Node pause-396108 event: Registered Node pause-396108 in Controller
	  Normal   NodeReady                35s                kubelet          Node pause-396108 status is now: NodeReady
	  Normal   RegisteredNode           15s                node-controller  Node pause-396108 event: Registered Node pause-396108 in Controller
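	
	Note the doubled events: two kube-proxy "Starting" entries (75s and 17s ago) and two RegisteredNode entries bracket the restart, and NodeReady at 35s shows the kubelet recovered. This output comes from the standard describe call; a sketch, assuming the default minikube kubeconfig context named after the profile:
	
	  kubectl --context pause-396108 describe node pause-396108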
	
	
	==> dmesg <==
	[ +25.584783] overlayfs: idmapped layers are currently not supported
	[Nov24 03:42] overlayfs: idmapped layers are currently not supported
	[Nov24 03:43] overlayfs: idmapped layers are currently not supported
	[  +2.949427] overlayfs: idmapped layers are currently not supported
	[Nov24 03:44] overlayfs: idmapped layers are currently not supported
	[Nov24 03:45] overlayfs: idmapped layers are currently not supported
	[Nov24 03:46] overlayfs: idmapped layers are currently not supported
	[Nov24 03:51] overlayfs: idmapped layers are currently not supported
	[ +32.185990] overlayfs: idmapped layers are currently not supported
	[Nov24 03:52] overlayfs: idmapped layers are currently not supported
	[Nov24 03:54] overlayfs: idmapped layers are currently not supported
	[Nov24 03:55] overlayfs: idmapped layers are currently not supported
	[Nov24 03:56] overlayfs: idmapped layers are currently not supported
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1b824daf1dec3f99a1d5c16e27c0cfc32418b57d8ec747134d3686b45610c51f] <==
	{"level":"warn","ts":"2025-11-24T04:09:24.413000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.419591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.442949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.467827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.493026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.500963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.539570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.576489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.581252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.608421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.631036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.658522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.672966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.690512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.705836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.726560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.771259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.776717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.810689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.832515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.853653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.898899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.911202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.937078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:25.018872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44918","server-name":"","error":"EOF"}
	
	
	==> etcd [94b696efd6b4382c567f3d5ef6f1fd9532fd20a48c4c2f66e053c1586cb5b17e] <==
	{"level":"warn","ts":"2025-11-24T04:08:18.842191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:08:18.856256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:08:18.880240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:08:18.908822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:08:18.932406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:08:18.951020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:08:19.046343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39712","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T04:09:14.435486Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T04:09:14.435569Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-396108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-24T04:09:14.435724Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T04:09:14.597603Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T04:09:14.597718Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T04:09:14.597769Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-24T04:09:14.597934Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T04:09:14.597951Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-24T04:09:14.598941Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T04:09:14.599104Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T04:09:14.599153Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T04:09:14.599056Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T04:09:14.599224Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T04:09:14.599268Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T04:09:14.601284Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-24T04:09:14.601359Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T04:09:14.601412Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T04:09:14.601421Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-396108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 04:09:45 up  2:51,  0 user,  load average: 2.21, 2.55, 2.27
	Linux pause-396108 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [510de710da2ba8ea819f9bd7d6008fc188c474758dc0982bc622288bdaf88a09] <==
	I1124 04:09:22.252042       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:09:22.252425       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 04:09:22.256026       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:09:22.256120       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:09:22.256163       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:09:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:09:22.477751       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:09:22.477782       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:09:22.477792       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:09:22.477912       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 04:09:26.678398       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:09:26.678433       1 metrics.go:72] Registering metrics
	I1124 04:09:26.678578       1 controller.go:711] "Syncing nftables rules"
	I1124 04:09:32.470553       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:09:32.470704       1 main.go:301] handling current node
	I1124 04:09:42.464462       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:09:42.464531       1 main.go:301] handling current node
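	
	After the restart the kindnet agent reconnects to the apiserver, syncs its informer caches, resyncs nftables rules, and resumes its roughly 10-second node-handling loop, so pod networking recovered. Its live log can be pulled directly; a sketch, with the pod name kindnet-mfqdh taken from the container status table above:
	
	  kubectl --context pause-396108 -n kube-system logs kindnet-mfqdh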
	
	
	==> kindnet [c8a48af8745d6e7ed60d98860bbdd720ffd910fb5f4ca540179f0c1ded57c194] <==
	I1124 04:08:28.727531       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:08:28.727915       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 04:08:28.728099       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:08:28.728112       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:08:28.728142       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:08:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:08:29.015692       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:08:29.015721       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:08:29.015731       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:08:29.016545       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 04:08:59.016097       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 04:08:59.016230       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 04:08:59.016315       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 04:08:59.017595       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1124 04:09:00.015979       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:09:00.016004       1 metrics.go:72] Registering metrics
	I1124 04:09:00.016091       1 controller.go:711] "Syncing nftables rules"
	I1124 04:09:09.022603       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:09:09.022653       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d6051a594d1510f507c94cbd06679c59fda5ce1b35b721c4b945044a6c20cff] <==
	I1124 04:09:26.640698       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 04:09:26.641086       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 04:09:26.641144       1 policy_source.go:240] refreshing policies
	I1124 04:09:26.641923       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 04:09:26.641997       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 04:09:26.642108       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 04:09:26.642347       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:09:26.642395       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 04:09:26.642447       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 04:09:26.643980       1 aggregator.go:171] initial CRD sync complete...
	I1124 04:09:26.644398       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 04:09:26.644446       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 04:09:26.644477       1 cache.go:39] Caches are synced for autoregister controller
	I1124 04:09:26.645243       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:09:26.646076       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 04:09:26.646161       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 04:09:26.648235       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 04:09:26.657675       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1124 04:09:26.671349       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 04:09:27.276529       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:09:27.718043       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:09:29.125484       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 04:09:29.160641       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 04:09:29.410542       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 04:09:29.464073       1 controller.go:667] quota admission added evaluator for: deployments.apps
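	
	The restarted apiserver's caches sync and quota admission evaluators re-register; the single "Error removing old endpoints" line is the reconciler running before any apiserver IPs are back in storage, and the kubernetes Service endpoints are typically repopulated on the next reconcile. A quick check, as a sketch assuming the default minikube kubeconfig context:
	
	  kubectl --context pause-396108 get endpoints kubernetes -n default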
	
	
	==> kube-apiserver [d013da0147b8934c86c12720efd80ec59f61d17784274267bb11bc96acac4c94] <==
	W1124 04:09:14.472042       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.472179       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.472320       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.472467       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.472597       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.473013       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.473186       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.473325       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.473554       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.473888       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.474021       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.484306       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.484530       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.484685       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.484843       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.484990       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485121       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485387       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485512       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485650       1 logging.go:55] [core] [Channel #26 SubChannel #28]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485720       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485782       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485859       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485936       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.486006       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5b712eafb8e889f1f03dcbca8e8cff4e20805686a79699de0f7ac21affd0b9f9] <==
	I1124 04:08:26.928195       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 04:08:26.929357       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:08:26.933337       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 04:08:26.933372       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 04:08:26.933423       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 04:08:26.933451       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 04:08:26.933456       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 04:08:26.933461       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 04:08:26.939043       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 04:08:26.939815       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 04:08:26.945544       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 04:08:26.957810       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:08:26.961490       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 04:08:26.964376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:08:26.964397       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:08:26.964405       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:08:26.966515       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 04:08:26.966562       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 04:08:26.966714       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 04:08:26.966735       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 04:08:26.966934       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 04:08:26.967625       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 04:08:26.990946       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-396108" podCIDRs=["10.244.0.0/24"]
	I1124 04:08:26.991071       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 04:09:11.966539       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [ec93103e13f8d61b3bbca237e6337d82343ed31e17f5875e3885f8f49c8ba154] <==
	I1124 04:09:29.085773       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 04:09:29.086662       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 04:09:29.086667       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 04:09:29.094538       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 04:09:29.094634       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 04:09:29.095048       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 04:09:29.095097       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 04:09:29.095173       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 04:09:29.095312       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 04:09:29.098624       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 04:09:29.103308       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 04:09:29.103409       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 04:09:29.113060       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:09:29.113132       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 04:09:29.113204       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 04:09:29.113240       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:09:29.113252       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:09:29.113260       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:09:29.113333       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 04:09:29.113404       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 04:09:29.114290       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 04:09:29.114325       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 04:09:29.114340       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 04:09:29.139817       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:09:29.145299       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [11beefbf1a8a00864fe3381844d37aedc1f9ce9831b394bb7bdc1ded4239d89e] <==
	I1124 04:08:28.651477       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:08:28.748416       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:08:28.848808       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:08:28.848844       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 04:08:28.848932       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:08:28.932206       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:08:28.932255       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:08:28.945116       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:08:28.945450       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:08:28.945462       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:08:28.947472       1 config.go:200] "Starting service config controller"
	I1124 04:08:28.947487       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:08:28.947503       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:08:28.947510       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:08:28.947532       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:08:28.947536       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:08:28.948150       1 config.go:309] "Starting node config controller"
	I1124 04:08:28.948157       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:08:28.948163       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:08:29.047770       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:08:29.047814       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 04:08:29.047874       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [848b7a5b1960bf771106d3ade5f36482eb5247f3fdffae808cd1b74ec8b48cb5] <==
	I1124 04:09:24.868630       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:09:26.704366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:09:26.814527       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:09:26.814657       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 04:09:26.815712       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:09:26.928372       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:09:26.928499       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:09:26.937615       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:09:26.938003       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:09:26.938184       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:09:26.939503       1 config.go:200] "Starting service config controller"
	I1124 04:09:26.939557       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:09:26.939600       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:09:26.939627       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:09:26.939663       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:09:26.939689       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:09:26.940384       1 config.go:309] "Starting node config controller"
	I1124 04:09:26.943138       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:09:26.943192       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:09:27.040376       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:09:27.040488       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:09:27.040513       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c7f699f1cc0ef484fe9224d86d2fb5cdd924b5f1e89ed3e12a85d92c72cc378e] <==
	E1124 04:08:20.357634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 04:08:20.357696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 04:08:20.357748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 04:08:20.357822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 04:08:20.357879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 04:08:20.358105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 04:08:20.358225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 04:08:20.358271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 04:08:20.358287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 04:08:20.358302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 04:08:20.358360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 04:08:20.358414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 04:08:21.159907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 04:08:21.177663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 04:08:21.195203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 04:08:21.225886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 04:08:21.226598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 04:08:21.321629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 04:08:21.345920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1124 04:08:21.916038       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:09:14.463066       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 04:09:14.464099       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 04:09:14.471263       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 04:09:14.471360       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 04:09:14.473403       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e043bea1c710a5ce21da1af2f69f48cc7f408d94c02ad317ef742420e6047668] <==
	I1124 04:09:25.285813       1 serving.go:386] Generated self-signed cert in-memory
	I1124 04:09:27.167194       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 04:09:27.167319       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:09:27.179918       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 04:09:27.180146       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 04:09:27.180200       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 04:09:27.180246       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 04:09:27.182308       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:09:27.194519       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:09:27.190516       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:09:27.194780       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:09:27.280560       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 04:09:27.295293       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:09:27.295403       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:09:21 pause-396108 kubelet[1304]: I1124 04:09:21.870570    1304 scope.go:117] "RemoveContainer" containerID="5b712eafb8e889f1f03dcbca8e8cff4e20805686a79699de0f7ac21affd0b9f9"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.871179    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-xfr6t\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fd71cd99-c8ae-4289-91db-9e0d7fe80820" pod="kube-system/coredns-66bc5c9577-xfr6t"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.871463    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="645c5104c13988790f9502b276745a8a" pod="kube-system/etcd-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.871714    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="01d4d8b7353208f37f9c78a2f5d85171" pod="kube-system/kube-scheduler-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.871956    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="479c0426591e889826070894e4ec2fe6" pod="kube-system/kube-apiserver-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.872206    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6f2a5d5024437d0bdcaef8a7380af89f" pod="kube-system/kube-controller-manager-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.872453    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scjq4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="55cb7252-6c8b-4499-8353-ffca1a4f06d1" pod="kube-system/kube-proxy-scjq4"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: I1124 04:09:21.929656    1304 scope.go:117] "RemoveContainer" containerID="c8a48af8745d6e7ed60d98860bbdd720ffd910fb5f4ca540179f0c1ded57c194"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.930165    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="645c5104c13988790f9502b276745a8a" pod="kube-system/etcd-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.930401    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="01d4d8b7353208f37f9c78a2f5d85171" pod="kube-system/kube-scheduler-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.931024    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="479c0426591e889826070894e4ec2fe6" pod="kube-system/kube-apiserver-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.931262    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6f2a5d5024437d0bdcaef8a7380af89f" pod="kube-system/kube-controller-manager-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.931444    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scjq4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="55cb7252-6c8b-4499-8353-ffca1a4f06d1" pod="kube-system/kube-proxy-scjq4"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.931647    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-mfqdh\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bb8e0be7-54b6-4486-9171-829f7caa1732" pod="kube-system/kindnet-mfqdh"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.931797    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-xfr6t\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fd71cd99-c8ae-4289-91db-9e0d7fe80820" pod="kube-system/coredns-66bc5c9577-xfr6t"
	Nov 24 04:09:22 pause-396108 kubelet[1304]: W1124 04:09:22.833460    1304 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 24 04:09:26 pause-396108 kubelet[1304]: E1124 04:09:26.456857    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-mfqdh\" is forbidden: User \"system:node:pause-396108\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-396108' and this object" podUID="bb8e0be7-54b6-4486-9171-829f7caa1732" pod="kube-system/kindnet-mfqdh"
	Nov 24 04:09:26 pause-396108 kubelet[1304]: E1124 04:09:26.457122    1304 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-396108\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-396108' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 24 04:09:26 pause-396108 kubelet[1304]: E1124 04:09:26.457145    1304 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-396108\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-396108' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 24 04:09:26 pause-396108 kubelet[1304]: E1124 04:09:26.457176    1304 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-396108\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-396108' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 24 04:09:26 pause-396108 kubelet[1304]: E1124 04:09:26.523522    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-xfr6t\" is forbidden: User \"system:node:pause-396108\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-396108' and this object" podUID="fd71cd99-c8ae-4289-91db-9e0d7fe80820" pod="kube-system/coredns-66bc5c9577-xfr6t"
	Nov 24 04:09:32 pause-396108 kubelet[1304]: W1124 04:09:32.855465    1304 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 24 04:09:42 pause-396108 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 04:09:42 pause-396108 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 04:09:42 pause-396108 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
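
Note: three of the conditions captured above read together. The restarted kube-proxy warns that nodePortAddresses is unset (server.go:256) and itself suggests "--nodeport-addresses primary"; the kube-scheduler's "Failed to watch ... forbidden" errors are the usual transient bootstrap errors, logged before the scheduler's RBAC bindings had propagated (the "Caches are synced" line at 04:08:21.916038 shows recovery); and the kubelet's "connect: connection refused" on 192.168.85.2:8443 reflects the apiserver being down mid-restart. A minimal triage sketch, assuming the kubeadm-style kube-proxy ConfigMap that minikube deploys (these commands are illustrative follow-ups, not part of the harness):

	# 1. Scope NodePort traffic to the node's primary IP, as the warning suggests:
	#    set nodePortAddresses: ["primary"] in config.conf, then restart kube-proxy.
	kubectl --context pause-396108 -n kube-system edit configmap kube-proxy
	kubectl --context pause-396108 -n kube-system rollout restart daemonset kube-proxy
	
	# 2. Confirm the scheduler's RBAC once the apiserver is reachable again;
	#    only a persistent "no" would make the watch failures a real problem.
	kubectl --context pause-396108 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler --all-namespaces
	
	# 3. Check whether the apiserver container is actually up inside the node
	#    while the kubelet is reporting refused connections on :8443.
	out/minikube-linux-arm64 ssh -p pause-396108 "sudo crictl ps -a --name kube-apiserver"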
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-396108 -n pause-396108
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-396108 -n pause-396108: exit status 2 (363.288053ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
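
Note: the "(may be ok)" annotation is deliberate: minikube status reports component state through its exit code, so a host that is Running with a paused or stopped control plane exits non-zero even though the query itself succeeded. A hand-run sketch of the same check (profile name taken from the test above):

	out/minikube-linux-arm64 status -p pause-396108 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	echo "exit=$?"   # non-zero with Host=Running is how a paused control plane presents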
helpers_test.go:269: (dbg) Run:  kubectl --context pause-396108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-396108
helpers_test.go:243: (dbg) docker inspect pause-396108:

-- stdout --
	[
	    {
	        "Id": "057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d",
	        "Created": "2025-11-24T04:07:55.891399667Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 447988,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:07:55.95087115Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d/hostname",
	        "HostsPath": "/var/lib/docker/containers/057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d/hosts",
	        "LogPath": "/var/lib/docker/containers/057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d/057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d-json.log",
	        "Name": "/pause-396108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-396108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-396108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "057fcb24b84cefb9aedb4f6424d932811c2f4a2fac22933f0d90412e3f492f9d",
	                "LowerDir": "/var/lib/docker/overlay2/a8afb836e55c54139d976e57b05e90be0e57acec71a29ada2352540504372b50-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8afb836e55c54139d976e57b05e90be0e57acec71a29ada2352540504372b50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8afb836e55c54139d976e57b05e90be0e57acec71a29ada2352540504372b50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8afb836e55c54139d976e57b05e90be0e57acec71a29ada2352540504372b50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-396108",
	                "Source": "/var/lib/docker/volumes/pause-396108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-396108",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-396108",
	                "name.minikube.sigs.k8s.io": "pause-396108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b5b53415c9727420745f4408e8c207f52a72d53b915f23812bee1eebe926b61d",
	            "SandboxKey": "/var/run/docker/netns/b5b53415c972",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-396108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:fc:d4:91:90:f2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bc9e483a8917fcc6ac0fd77591b3183468ef19521a99971e75d95e0eb70d15c",
	                    "EndpointID": "dd7b7c0070075c1e5cf0961d7679a85a877ad50a78691d1311ce0cdbd5af0635",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-396108",
	                        "057fcb24b84c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
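
Note: the port mappings in the inspect dump above are what the provisioning log further down reads back with a Go template. Extracting a single mapping by hand, e.g. the SSH forward, uses the same template minikube does:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-396108
	# -> 33396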
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-396108 -n pause-396108
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-396108 -n pause-396108: exit status 2 (351.124646ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-396108 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-396108 logs -n 25: (1.367679935s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-314310 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:03 UTC │ 24 Nov 25 04:04 UTC │
	│ start   │ -p missing-upgrade-935894 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-935894    │ jenkins │ v1.32.0 │ 24 Nov 25 04:03 UTC │ 24 Nov 25 04:04 UTC │
	│ start   │ -p NoKubernetes-314310 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │ 24 Nov 25 04:04 UTC │
	│ delete  │ -p NoKubernetes-314310                                                                                                                   │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │ 24 Nov 25 04:04 UTC │
	│ start   │ -p NoKubernetes-314310 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │ 24 Nov 25 04:04 UTC │
	│ start   │ -p missing-upgrade-935894 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-935894    │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │ 24 Nov 25 04:05 UTC │
	│ ssh     │ -p NoKubernetes-314310 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │                     │
	│ stop    │ -p NoKubernetes-314310                                                                                                                   │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │ 24 Nov 25 04:04 UTC │
	│ start   │ -p NoKubernetes-314310 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:04 UTC │ 24 Nov 25 04:05 UTC │
	│ ssh     │ -p NoKubernetes-314310 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:05 UTC │                     │
	│ delete  │ -p NoKubernetes-314310                                                                                                                   │ NoKubernetes-314310       │ jenkins │ v1.37.0 │ 24 Nov 25 04:05 UTC │ 24 Nov 25 04:05 UTC │
	│ start   │ -p kubernetes-upgrade-207884 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-207884 │ jenkins │ v1.37.0 │ 24 Nov 25 04:05 UTC │ 24 Nov 25 04:05 UTC │
	│ delete  │ -p missing-upgrade-935894                                                                                                                │ missing-upgrade-935894    │ jenkins │ v1.37.0 │ 24 Nov 25 04:05 UTC │ 24 Nov 25 04:05 UTC │
	│ stop    │ -p kubernetes-upgrade-207884                                                                                                             │ kubernetes-upgrade-207884 │ jenkins │ v1.37.0 │ 24 Nov 25 04:05 UTC │ 24 Nov 25 04:05 UTC │
	│ start   │ -p stopped-upgrade-191757 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-191757    │ jenkins │ v1.32.0 │ 24 Nov 25 04:05 UTC │ 24 Nov 25 04:06 UTC │
	│ start   │ -p kubernetes-upgrade-207884 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-207884 │ jenkins │ v1.37.0 │ 24 Nov 25 04:05 UTC │                     │
	│ stop    │ stopped-upgrade-191757 stop                                                                                                              │ stopped-upgrade-191757    │ jenkins │ v1.32.0 │ 24 Nov 25 04:06 UTC │ 24 Nov 25 04:06 UTC │
	│ start   │ -p stopped-upgrade-191757 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-191757    │ jenkins │ v1.37.0 │ 24 Nov 25 04:06 UTC │ 24 Nov 25 04:06 UTC │
	│ delete  │ -p stopped-upgrade-191757                                                                                                                │ stopped-upgrade-191757    │ jenkins │ v1.37.0 │ 24 Nov 25 04:06 UTC │ 24 Nov 25 04:06 UTC │
	│ start   │ -p running-upgrade-352504 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-352504    │ jenkins │ v1.32.0 │ 24 Nov 25 04:06 UTC │ 24 Nov 25 04:07 UTC │
	│ start   │ -p running-upgrade-352504 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-352504    │ jenkins │ v1.37.0 │ 24 Nov 25 04:07 UTC │ 24 Nov 25 04:07 UTC │
	│ delete  │ -p running-upgrade-352504                                                                                                                │ running-upgrade-352504    │ jenkins │ v1.37.0 │ 24 Nov 25 04:07 UTC │ 24 Nov 25 04:07 UTC │
	│ start   │ -p pause-396108 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-396108              │ jenkins │ v1.37.0 │ 24 Nov 25 04:07 UTC │ 24 Nov 25 04:09 UTC │
	│ start   │ -p pause-396108 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-396108              │ jenkins │ v1.37.0 │ 24 Nov 25 04:09 UTC │ 24 Nov 25 04:09 UTC │
	│ pause   │ -p pause-396108 --alsologtostderr -v=5                                                                                                   │ pause-396108              │ jenkins │ v1.37.0 │ 24 Nov 25 04:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:09:12
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:09:12.750350  451987 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:09:12.750617  451987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:09:12.750631  451987 out.go:374] Setting ErrFile to fd 2...
	I1124 04:09:12.750637  451987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:09:12.750902  451987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:09:12.751309  451987 out.go:368] Setting JSON to false
	I1124 04:09:12.752302  451987 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10282,"bootTime":1763947071,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:09:12.752378  451987 start.go:143] virtualization:  
	I1124 04:09:12.755378  451987 out.go:179] * [pause-396108] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:09:12.759439  451987 notify.go:221] Checking for updates...
	I1124 04:09:12.763205  451987 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:09:12.766081  451987 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:09:12.768993  451987 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:09:12.771907  451987 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:09:12.774775  451987 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:09:12.777748  451987 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:09:12.781485  451987 config.go:182] Loaded profile config "pause-396108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:09:12.782101  451987 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:09:12.823554  451987 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:09:12.823677  451987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:09:12.883750  451987 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 04:09:12.872982093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:09:12.883889  451987 docker.go:319] overlay module found
	I1124 04:09:12.887347  451987 out.go:179] * Using the docker driver based on existing profile
	I1124 04:09:12.890241  451987 start.go:309] selected driver: docker
	I1124 04:09:12.890268  451987 start.go:927] validating driver "docker" against &{Name:pause-396108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-396108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:09:12.890404  451987 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:09:12.890549  451987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:09:12.951141  451987 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 04:09:12.941205945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:09:12.951536  451987 cni.go:84] Creating CNI manager for ""
	I1124 04:09:12.951605  451987 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:09:12.951663  451987 start.go:353] cluster config:
	{Name:pause-396108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-396108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:09:12.954856  451987 out.go:179] * Starting "pause-396108" primary control-plane node in "pause-396108" cluster
	I1124 04:09:12.957641  451987 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:09:12.960702  451987 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:09:12.963567  451987 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:09:12.963636  451987 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:09:12.963670  451987 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 04:09:12.963683  451987 cache.go:65] Caching tarball of preloaded images
	I1124 04:09:12.963778  451987 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:09:12.963789  451987 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 04:09:12.963920  451987 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/config.json ...
	I1124 04:09:12.990019  451987 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:09:12.990043  451987 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:09:12.990062  451987 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:09:12.990090  451987 start.go:360] acquireMachinesLock for pause-396108: {Name:mk45a889be94844acd02a961e5f42591cb13ad56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:09:12.990160  451987 start.go:364] duration metric: took 43.365µs to acquireMachinesLock for "pause-396108"
	I1124 04:09:12.990183  451987 start.go:96] Skipping create...Using existing machine configuration
	I1124 04:09:12.990191  451987 fix.go:54] fixHost starting: 
	I1124 04:09:12.990504  451987 cli_runner.go:164] Run: docker container inspect pause-396108 --format={{.State.Status}}
	I1124 04:09:13.010803  451987 fix.go:112] recreateIfNeeded on pause-396108: state=Running err=<nil>
	W1124 04:09:13.010848  451987 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 04:09:13.014063  451987 out.go:252] * Updating the running docker "pause-396108" container ...
	I1124 04:09:13.014110  451987 machine.go:94] provisionDockerMachine start ...
	I1124 04:09:13.014194  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:13.034947  451987 main.go:143] libmachine: Using SSH client type: native
	I1124 04:09:13.035311  451987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1124 04:09:13.035326  451987 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:09:13.181968  451987 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-396108
	
	I1124 04:09:13.181995  451987 ubuntu.go:182] provisioning hostname "pause-396108"
	I1124 04:09:13.182068  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:13.205774  451987 main.go:143] libmachine: Using SSH client type: native
	I1124 04:09:13.206090  451987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1124 04:09:13.206106  451987 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-396108 && echo "pause-396108" | sudo tee /etc/hostname
	I1124 04:09:13.363087  451987 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-396108
	
	I1124 04:09:13.363212  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:13.381386  451987 main.go:143] libmachine: Using SSH client type: native
	I1124 04:09:13.381707  451987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1124 04:09:13.381730  451987 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-396108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-396108/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-396108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 04:09:13.540893  451987 main.go:143] libmachine: SSH cmd err, output: <nil>: 
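The script above makes the /etc/hosts mapping idempotent: rewrite an existing 127.0.1.1 entry if one is present, append one otherwise. A standalone sketch of the same logic, assuming GNU grep/sed (where \s matches whitespace) and a placeholder hostname:

	HOST=pause-396108                          # placeholder node name
	if ! grep -q "\s${HOST}\$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	    # rewrite the existing 127.0.1.1 entry in place
	    sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${HOST}/" /etc/hosts
	  else
	    # no 127.0.1.1 entry yet; append one
	    echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts
	  fi
	fi

Pinning the name to 127.0.1.1 rather than 127.0.0.1 follows the Debian convention, which fits the Debian 12 base the node reports below.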
	I1124 04:09:13.540922  451987 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:09:13.540981  451987 ubuntu.go:190] setting up certificates
	I1124 04:09:13.540992  451987 provision.go:84] configureAuth start
	I1124 04:09:13.541085  451987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-396108
	I1124 04:09:13.569326  451987 provision.go:143] copyHostCerts
	I1124 04:09:13.569392  451987 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:09:13.569406  451987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:09:13.569488  451987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:09:13.569614  451987 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:09:13.569623  451987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:09:13.569653  451987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:09:13.569704  451987 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:09:13.569708  451987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:09:13.569736  451987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:09:13.569791  451987 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.pause-396108 san=[127.0.0.1 192.168.85.2 localhost minikube pause-396108]
	I1124 04:09:14.006301  451987 provision.go:177] copyRemoteCerts
	I1124 04:09:14.006411  451987 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:09:14.006493  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:14.028005  451987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/pause-396108/id_rsa Username:docker}
	I1124 04:09:14.144256  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:09:14.164482  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 04:09:14.184367  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 04:09:14.206927  451987 provision.go:87] duration metric: took 665.908716ms to configureAuth
	I1124 04:09:14.206956  451987 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:09:14.207230  451987 config.go:182] Loaded profile config "pause-396108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:09:14.207402  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:14.227218  451987 main.go:143] libmachine: Using SSH client type: native
	I1124 04:09:14.227558  451987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1124 04:09:14.227580  451987 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:09:19.650538  451987 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 04:09:19.650565  451987 machine.go:97] duration metric: took 6.636446475s to provisionDockerMachine
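Most of the 6.6s provisionDockerMachine time is the crio restart triggered by this step (the SSH command ran from 04:09:14 to 04:09:19). Untangled from the log, the command writes a sysconfig drop-in and bounces the runtime:

	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio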
	I1124 04:09:19.650577  451987 start.go:293] postStartSetup for "pause-396108" (driver="docker")
	I1124 04:09:19.650588  451987 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:09:19.650671  451987 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:09:19.650719  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:19.668078  451987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/pause-396108/id_rsa Username:docker}
	I1124 04:09:19.774557  451987 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:09:19.777828  451987 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:09:19.777856  451987 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:09:19.777867  451987 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:09:19.777919  451987 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:09:19.777998  451987 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:09:19.778099  451987 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:09:19.785799  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:09:19.803579  451987 start.go:296] duration metric: took 152.986539ms for postStartSetup
	I1124 04:09:19.803679  451987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:09:19.803744  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:19.820378  451987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/pause-396108/id_rsa Username:docker}
	I1124 04:09:19.919625  451987 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:09:19.924487  451987 fix.go:56] duration metric: took 6.934286176s for fixHost
	I1124 04:09:19.924515  451987 start.go:83] releasing machines lock for "pause-396108", held for 6.934343424s
	I1124 04:09:19.924583  451987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-396108
	I1124 04:09:19.939840  451987 ssh_runner.go:195] Run: cat /version.json
	I1124 04:09:19.939868  451987 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:09:19.939917  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:19.939922  451987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-396108
	I1124 04:09:19.956671  451987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/pause-396108/id_rsa Username:docker}
	I1124 04:09:19.966642  451987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/pause-396108/id_rsa Username:docker}
	I1124 04:09:20.161296  451987 ssh_runner.go:195] Run: systemctl --version
	I1124 04:09:20.168131  451987 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:09:20.211941  451987 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:09:20.216129  451987 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:09:20.216221  451987 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:09:20.224585  451987 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
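The find invocation above is logged with its shell quoting stripped. A runnable rendering of what it does (rename any bridge/podman CNI configs out of the way so only the CNI minikube manages stays active) would be roughly the following; the inner sudo from the log is redundant once find itself runs under sudo:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;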
	I1124 04:09:20.224661  451987 start.go:496] detecting cgroup driver to use...
	I1124 04:09:20.224712  451987 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:09:20.224763  451987 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:09:20.239105  451987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:09:20.252318  451987 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:09:20.252411  451987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:09:20.268162  451987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:09:20.281131  451987 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:09:20.421944  451987 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:09:20.587952  451987 docker.go:234] disabling docker service ...
	I1124 04:09:20.588069  451987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:09:20.613735  451987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:09:20.634621  451987 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:09:20.832850  451987 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:09:21.032053  451987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:09:21.048361  451987 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:09:21.067442  451987 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:09:21.067561  451987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.077505  451987 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:09:21.077684  451987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.087529  451987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.097191  451987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.107367  451987 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:09:21.116407  451987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.128083  451987 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.136921  451987 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:09:21.145813  451987 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:09:21.153095  451987 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:09:21.160358  451987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:09:21.307643  451987 ssh_runner.go:195] Run: sudo systemctl restart crio
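Net effect of this block of sed edits: the drop-in pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and opens unprivileged ports. A sketch of the result (the real /etc/crio/crio.conf.d/02-crio.conf carries more settings, and the exact section placement is an assumption here):

	# /etc/crio/crio.conf.d/02-crio.conf (excerpt, reconstructed from the edits above)
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"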
	I1124 04:09:21.519902  451987 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:09:21.520009  451987 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:09:21.524081  451987 start.go:564] Will wait 60s for crictl version
	I1124 04:09:21.524148  451987 ssh_runner.go:195] Run: which crictl
	I1124 04:09:21.528037  451987 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:09:21.561138  451987 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:09:21.561256  451987 ssh_runner.go:195] Run: crio --version
	I1124 04:09:21.591751  451987 ssh_runner.go:195] Run: crio --version
	I1124 04:09:21.629832  451987 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:09:21.632940  451987 cli_runner.go:164] Run: docker network inspect pause-396108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:09:21.648157  451987 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 04:09:21.652060  451987 kubeadm.go:884] updating cluster {Name:pause-396108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-396108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:09:21.652203  451987 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:09:21.652254  451987 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:09:21.689646  451987 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:09:21.689674  451987 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:09:21.689730  451987 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:09:21.718864  451987 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:09:21.718932  451987 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:09:21.718947  451987 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 04:09:21.719046  451987 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-396108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-396108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 04:09:21.719130  451987 ssh_runner.go:195] Run: crio config
	I1124 04:09:21.777126  451987 cni.go:84] Creating CNI manager for ""
	I1124 04:09:21.777154  451987 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:09:21.777177  451987 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:09:21.777219  451987 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-396108 NodeName:pause-396108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:09:21.777385  451987 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-396108"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 04:09:21.777463  451987 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:09:21.784833  451987 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:09:21.784956  451987 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:09:21.792387  451987 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1124 04:09:21.815953  451987 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:09:21.831557  451987 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1124 04:09:21.855404  451987 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:09:21.862966  451987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:09:22.169485  451987 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:09:22.187067  451987 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108 for IP: 192.168.85.2
	I1124 04:09:22.187084  451987 certs.go:195] generating shared ca certs ...
	I1124 04:09:22.187100  451987 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:09:22.187242  451987 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:09:22.187283  451987 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:09:22.187290  451987 certs.go:257] generating profile certs ...
	I1124 04:09:22.187372  451987 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/client.key
	I1124 04:09:22.187444  451987 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/apiserver.key.0991ffb6
	I1124 04:09:22.187485  451987 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/proxy-client.key
	I1124 04:09:22.187598  451987 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:09:22.187628  451987 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:09:22.187637  451987 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:09:22.187662  451987 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:09:22.187685  451987 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:09:22.187707  451987 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:09:22.187754  451987 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:09:22.188414  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:09:22.214114  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:09:22.244496  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:09:22.264271  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:09:22.286807  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 04:09:22.309966  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 04:09:22.341529  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:09:22.370650  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1124 04:09:22.399461  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:09:22.433686  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:09:22.467149  451987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:09:22.492716  451987 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:09:22.517092  451987 ssh_runner.go:195] Run: openssl version
	I1124 04:09:22.531578  451987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:09:22.543884  451987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:09:22.554862  451987 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:09:22.554978  451987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:09:22.607494  451987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:09:22.616099  451987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:09:22.628603  451987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:09:22.637879  451987 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:09:22.638001  451987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:09:22.687076  451987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
	I1124 04:09:22.695583  451987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:09:22.704403  451987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:09:22.709443  451987 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:09:22.709597  451987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:09:22.797497  451987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
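The .0 symlink names here (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes; linking each CA as <hash>.0 under /etc/ssl/certs is what lets the system trust store resolve it by hash. A sketch for a single certificate:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"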
	I1124 04:09:22.805843  451987 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:09:22.818610  451987 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 04:09:22.871698  451987 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 04:09:22.915141  451987 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 04:09:22.978390  451987 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 04:09:23.024028  451987 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 04:09:23.080054  451987 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
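Each -checkend 86400 probe asks whether a control-plane certificate remains valid for the next 24 hours: openssl exits 0 if so and non-zero if the certificate will have expired by then, which is minikube's cue to regenerate it rather than reuse it. A standalone sketch:

	CRT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
	if openssl x509 -noout -in "$CRT" -checkend 86400; then
	  echo "valid for at least another 24h"
	else
	  echo "expires within 24h (or already expired)"
	fi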
	I1124 04:09:23.131338  451987 kubeadm.go:401] StartCluster: {Name:pause-396108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-396108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:09:23.131508  451987 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:09:23.131616  451987 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:09:23.190634  451987 cri.go:89] found id: "510de710da2ba8ea819f9bd7d6008fc188c474758dc0982bc622288bdaf88a09"
	I1124 04:09:23.190708  451987 cri.go:89] found id: "ec93103e13f8d61b3bbca237e6337d82343ed31e17f5875e3885f8f49c8ba154"
	I1124 04:09:23.190727  451987 cri.go:89] found id: "1b824daf1dec3f99a1d5c16e27c0cfc32418b57d8ec747134d3686b45610c51f"
	I1124 04:09:23.190749  451987 cri.go:89] found id: "9d6051a594d1510f507c94cbd06679c59fda5ce1b35b721c4b945044a6c20cff"
	I1124 04:09:23.190783  451987 cri.go:89] found id: "5774c046abe9043aadbfd6b9c4831cb9ed1b5f813e056e0057a80407ea6f6d2f"
	I1124 04:09:23.190808  451987 cri.go:89] found id: "e043bea1c710a5ce21da1af2f69f48cc7f408d94c02ad317ef742420e6047668"
	I1124 04:09:23.190829  451987 cri.go:89] found id: "848b7a5b1960bf771106d3ade5f36482eb5247f3fdffae808cd1b74ec8b48cb5"
	I1124 04:09:23.190851  451987 cri.go:89] found id: "490bbd4ce436cb05c5881f746e8020778291193555fa5f32a43ae3598eddbd0d"
	I1124 04:09:23.190884  451987 cri.go:89] found id: "c8a48af8745d6e7ed60d98860bbdd720ffd910fb5f4ca540179f0c1ded57c194"
	I1124 04:09:23.190911  451987 cri.go:89] found id: "11beefbf1a8a00864fe3381844d37aedc1f9ce9831b394bb7bdc1ded4239d89e"
	I1124 04:09:23.190929  451987 cri.go:89] found id: "c7f699f1cc0ef484fe9224d86d2fb5cdd924b5f1e89ed3e12a85d92c72cc378e"
	I1124 04:09:23.190950  451987 cri.go:89] found id: "d013da0147b8934c86c12720efd80ec59f61d17784274267bb11bc96acac4c94"
	I1124 04:09:23.190984  451987 cri.go:89] found id: "5b712eafb8e889f1f03dcbca8e8cff4e20805686a79699de0f7ac21affd0b9f9"
	I1124 04:09:23.191008  451987 cri.go:89] found id: "94b696efd6b4382c567f3d5ef6f1fd9532fd20a48c4c2f66e053c1586cb5b17e"
	I1124 04:09:23.191026  451987 cri.go:89] found id: ""
	I1124 04:09:23.191110  451987 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 04:09:23.211676  451987 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:09:23Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:09:23.211754  451987 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:09:23.221439  451987 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 04:09:23.221509  451987 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 04:09:23.221602  451987 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 04:09:23.236200  451987 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 04:09:23.236878  451987 kubeconfig.go:125] found "pause-396108" server: "https://192.168.85.2:8443"
	I1124 04:09:23.237736  451987 kapi.go:59] client config for pause-396108: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/client.crt", KeyFile:"/home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/client.key", CAFile:"/home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 04:09:23.238297  451987 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 04:09:23.238545  451987 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 04:09:23.238571  451987 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 04:09:23.238617  451987 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1124 04:09:23.238645  451987 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 04:09:23.238962  451987 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 04:09:23.251749  451987 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 04:09:23.251831  451987 kubeadm.go:602] duration metric: took 30.301581ms to restartPrimaryControlPlane
	I1124 04:09:23.251856  451987 kubeadm.go:403] duration metric: took 120.528069ms to StartCluster
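The decision at kubeadm.go:635 comes down to a diff: if the kubeadm.yaml already on the node matches the freshly rendered kubeadm.yaml.new, the running control plane is reused untouched. A sketch of that check:

	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	  echo "no reconfiguration required"     # the path taken in this run
	else
	  echo "config drifted; control plane needs reconfiguring"
	fi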
	I1124 04:09:23.251898  451987 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:09:23.251985  451987 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:09:23.252821  451987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:09:23.253095  451987 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:09:23.253460  451987 config.go:182] Loaded profile config "pause-396108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:09:23.253613  451987 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:09:23.257009  451987 out.go:179] * Verifying Kubernetes components...
	I1124 04:09:23.257106  451987 out.go:179] * Enabled addons: 
	I1124 04:09:20.558358  437115 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.072222249s)
	W1124 04:09:20.558401  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 04:09:20.558410  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:20.558422  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:20.600811  437115 logs.go:123] Gathering logs for kube-scheduler [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87] ...
	I1124 04:09:20.600887  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:20.697809  437115 logs.go:123] Gathering logs for kube-controller-manager [01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c] ...
	I1124 04:09:20.697854  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c"
	W1124 04:09:20.745018  437115 logs.go:130] failed kube-controller-manager [01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c": Process exited with status 1
	stdout:
	
	stderr:
	E1124 04:09:20.742195    4120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c\": container with ID starting with 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c not found: ID does not exist" containerID="01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c"
	time="2025-11-24T04:09:20Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c\": container with ID starting with 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c not found: ID does not exist"
	 output: 
	** stderr ** 
	E1124 04:09:20.742195    4120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c\": container with ID starting with 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c not found: ID does not exist" containerID="01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c"
	time="2025-11-24T04:09:20Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c\": container with ID starting with 01a12298561208a7773d9dae6e7d45c46b42d92563f73b6a6db37ff2d05b7b8c not found: ID does not exist"
	
	** /stderr **
	I1124 04:09:20.745044  437115 logs.go:123] Gathering logs for CRI-O ...
	I1124 04:09:20.745063  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 04:09:20.822834  437115 logs.go:123] Gathering logs for kube-apiserver [c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9] ...
	I1124 04:09:20.822889  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9"
	I1124 04:09:20.862884  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:20.862924  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:20.915684  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:20.915714  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:21.053711  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:21.053747  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 04:09:23.575091  437115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:09:24.911573  437115 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:49602->192.168.76.2:8443: read: connection reset by peer
	I1124 04:09:24.911628  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 04:09:24.911688  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 04:09:24.968564  437115 cri.go:89] found id: "171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:24.968584  437115 cri.go:89] found id: "c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9"
	I1124 04:09:24.968589  437115 cri.go:89] found id: ""
	I1124 04:09:24.968597  437115 logs.go:282] 2 containers: [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1 c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9]
	I1124 04:09:24.968653  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:24.974792  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:24.982271  437115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 04:09:24.982342  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 04:09:25.040923  437115 cri.go:89] found id: ""
	I1124 04:09:25.040946  437115 logs.go:282] 0 containers: []
	W1124 04:09:25.040954  437115 logs.go:284] No container was found matching "etcd"
	I1124 04:09:25.040960  437115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 04:09:25.041019  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 04:09:25.109589  437115 cri.go:89] found id: ""
	I1124 04:09:25.109623  437115 logs.go:282] 0 containers: []
	W1124 04:09:25.109632  437115 logs.go:284] No container was found matching "coredns"
	I1124 04:09:25.109638  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 04:09:25.109699  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 04:09:25.154610  437115 cri.go:89] found id: "25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:25.154692  437115 cri.go:89] found id: ""
	I1124 04:09:25.154717  437115 logs.go:282] 1 containers: [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87]
	I1124 04:09:25.154812  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:25.163612  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 04:09:25.163684  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 04:09:25.201515  437115 cri.go:89] found id: ""
	I1124 04:09:25.201543  437115 logs.go:282] 0 containers: []
	W1124 04:09:25.201553  437115 logs.go:284] No container was found matching "kube-proxy"
	I1124 04:09:25.201559  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 04:09:25.201630  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 04:09:25.248296  437115 cri.go:89] found id: "c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:25.248316  437115 cri.go:89] found id: ""
	I1124 04:09:25.248323  437115 logs.go:282] 1 containers: [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32]
	I1124 04:09:25.248379  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:25.252625  437115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 04:09:25.252697  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 04:09:25.308993  437115 cri.go:89] found id: ""
	I1124 04:09:25.309017  437115 logs.go:282] 0 containers: []
	W1124 04:09:25.309028  437115 logs.go:284] No container was found matching "kindnet"
	I1124 04:09:25.309034  437115 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 04:09:25.309098  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 04:09:25.364253  437115 cri.go:89] found id: ""
	I1124 04:09:25.364331  437115 logs.go:282] 0 containers: []
	W1124 04:09:25.364343  437115 logs.go:284] No container was found matching "storage-provisioner"
	I1124 04:09:25.364358  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:25.364371  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 04:09:25.388044  437115 logs.go:123] Gathering logs for describe nodes ...
	I1124 04:09:25.388129  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 04:09:25.506329  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 04:09:25.506349  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:25.506365  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:23.259960  451987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:09:23.260086  451987 addons.go:530] duration metric: took 6.475851ms for enable addons: enabled=[]
	I1124 04:09:23.523972  451987 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:09:23.540488  451987 node_ready.go:35] waiting up to 6m0s for node "pause-396108" to be "Ready" ...
	I1124 04:09:26.564421  451987 node_ready.go:49] node "pause-396108" is "Ready"
	I1124 04:09:26.564448  451987 node_ready.go:38] duration metric: took 3.023906913s for node "pause-396108" to be "Ready" ...
	I1124 04:09:26.564461  451987 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:09:26.564521  451987 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:09:26.582528  451987 api_server.go:72] duration metric: took 3.329374699s to wait for apiserver process to appear ...
	I1124 04:09:26.582553  451987 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:09:26.582572  451987 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 04:09:26.646180  451987 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 04:09:26.646266  451987 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 04:09:27.082701  451987 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 04:09:27.095526  451987 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 04:09:27.095609  451987 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 04:09:27.583268  451987 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 04:09:27.591393  451987 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 04:09:27.592474  451987 api_server.go:141] control plane version: v1.34.1
	I1124 04:09:27.592499  451987 api_server.go:131] duration metric: took 1.009938777s to wait for apiserver health ...
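The 500s above are normal during a restart: /healthz reports each poststarthook, and entries such as rbac/bootstrap-roles and bootstrap-controller stay [-] until the apiserver finishes re-initializing, so minikube simply re-polls until it sees 200. Below is a minimal Go sketch of that retry-until-200 pattern; it is not minikube's actual api_server.go, and the InsecureSkipVerify transport is an assumption made for brevity where the real client trusts the cluster CA from .minikube/ca.crt.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns
	// 200 OK or the deadline expires. A 500 carrying a poststarthook
	// checklist, as in the trace above, just means "keep waiting".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Assumption: skip TLS verification in this sketch; the real
			// client is configured with the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}

On this trace the flip from 500 to 200 took about one second, which is why a short poll interval pays off.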
	I1124 04:09:27.592507  451987 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:09:27.595826  451987 system_pods.go:59] 7 kube-system pods found
	I1124 04:09:27.595866  451987 system_pods.go:61] "coredns-66bc5c9577-xfr6t" [fd71cd99-c8ae-4289-91db-9e0d7fe80820] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:09:27.595876  451987 system_pods.go:61] "etcd-pause-396108" [8cf88e79-6c75-41a5-8054-64ab30eed960] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:09:27.595881  451987 system_pods.go:61] "kindnet-mfqdh" [bb8e0be7-54b6-4486-9171-829f7caa1732] Running
	I1124 04:09:27.595889  451987 system_pods.go:61] "kube-apiserver-pause-396108" [cfca4498-6686-4773-8518-85bed07245bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:09:27.595900  451987 system_pods.go:61] "kube-controller-manager-pause-396108" [929bd265-8894-4dea-aada-003d5f8bb490] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:09:27.595909  451987 system_pods.go:61] "kube-proxy-scjq4" [55cb7252-6c8b-4499-8353-ffca1a4f06d1] Running
	I1124 04:09:27.595915  451987 system_pods.go:61] "kube-scheduler-pause-396108" [e5874182-3ff1-46f0-9d2f-38553f584bd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:09:27.595922  451987 system_pods.go:74] duration metric: took 3.407709ms to wait for pod list to return data ...
	I1124 04:09:27.595931  451987 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:09:27.600102  451987 default_sa.go:45] found service account: "default"
	I1124 04:09:27.600131  451987 default_sa.go:55] duration metric: took 4.190835ms for default service account to be created ...
	I1124 04:09:27.600142  451987 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 04:09:27.603016  451987 system_pods.go:86] 7 kube-system pods found
	I1124 04:09:27.603048  451987 system_pods.go:89] "coredns-66bc5c9577-xfr6t" [fd71cd99-c8ae-4289-91db-9e0d7fe80820] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:09:27.603061  451987 system_pods.go:89] "etcd-pause-396108" [8cf88e79-6c75-41a5-8054-64ab30eed960] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:09:27.603067  451987 system_pods.go:89] "kindnet-mfqdh" [bb8e0be7-54b6-4486-9171-829f7caa1732] Running
	I1124 04:09:27.603073  451987 system_pods.go:89] "kube-apiserver-pause-396108" [cfca4498-6686-4773-8518-85bed07245bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:09:27.603080  451987 system_pods.go:89] "kube-controller-manager-pause-396108" [929bd265-8894-4dea-aada-003d5f8bb490] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:09:27.603085  451987 system_pods.go:89] "kube-proxy-scjq4" [55cb7252-6c8b-4499-8353-ffca1a4f06d1] Running
	I1124 04:09:27.603099  451987 system_pods.go:89] "kube-scheduler-pause-396108" [e5874182-3ff1-46f0-9d2f-38553f584bd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:09:27.603114  451987 system_pods.go:126] duration metric: took 2.964813ms to wait for k8s-apps to be running ...
	I1124 04:09:27.603122  451987 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 04:09:27.603181  451987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:09:27.616923  451987 system_svc.go:56] duration metric: took 13.791257ms WaitForService to wait for kubelet
	I1124 04:09:27.616993  451987 kubeadm.go:587] duration metric: took 4.363843594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:09:27.617026  451987 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:09:27.619863  451987 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:09:27.619904  451987 node_conditions.go:123] node cpu capacity is 2
	I1124 04:09:27.619917  451987 node_conditions.go:105] duration metric: took 2.873523ms to run NodePressure ...
	I1124 04:09:27.619929  451987 start.go:242] waiting for startup goroutines ...
	I1124 04:09:27.619937  451987 start.go:247] waiting for cluster config update ...
	I1124 04:09:27.619945  451987 start.go:256] writing updated cluster config ...
	I1124 04:09:27.620265  451987 ssh_runner.go:195] Run: rm -f paused
	I1124 04:09:27.623675  451987 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:09:27.624315  451987 kapi.go:59] client config for pause-396108: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/client.crt", KeyFile:"/home/jenkins/minikube-integration/21975-289526/.minikube/profiles/pause-396108/client.key", CAFile:"/home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 04:09:27.627288  451987 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xfr6t" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:25.564193  437115 logs.go:123] Gathering logs for kube-apiserver [c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9] ...
	I1124 04:09:25.564269  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9"
	W1124 04:09:25.613626  437115 logs.go:130] failed kube-apiserver [c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9": Process exited with status 1
	stdout:
	
	stderr:
	E1124 04:09:25.604526    4237 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9\": container with ID starting with c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9 not found: ID does not exist" containerID="c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9"
	time="2025-11-24T04:09:25Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9\": container with ID starting with c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1124 04:09:25.604526    4237 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9\": container with ID starting with c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9 not found: ID does not exist" containerID="c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9"
	time="2025-11-24T04:09:25Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9\": container with ID starting with c62f247880aeb558146c4ec380b4a601b408e83750b8a4f01d56a7be94c58da9 not found: ID does not exist"
	
	** /stderr **
	I1124 04:09:25.613646  437115 logs.go:123] Gathering logs for kube-scheduler [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87] ...
	I1124 04:09:25.613658  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:25.733745  437115 logs.go:123] Gathering logs for kube-controller-manager [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32] ...
	I1124 04:09:25.733827  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:25.779622  437115 logs.go:123] Gathering logs for CRI-O ...
	I1124 04:09:25.779658  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 04:09:25.883437  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:25.883530  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:25.938708  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:25.938737  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:28.595176  437115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:09:28.595637  437115 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 04:09:28.595702  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 04:09:28.595779  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 04:09:28.626849  437115 cri.go:89] found id: "171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:28.626877  437115 cri.go:89] found id: ""
	I1124 04:09:28.626892  437115 logs.go:282] 1 containers: [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1]
	I1124 04:09:28.626949  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:28.634029  437115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 04:09:28.634100  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 04:09:28.663129  437115 cri.go:89] found id: ""
	I1124 04:09:28.663157  437115 logs.go:282] 0 containers: []
	W1124 04:09:28.663166  437115 logs.go:284] No container was found matching "etcd"
	I1124 04:09:28.663172  437115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 04:09:28.663232  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 04:09:28.688278  437115 cri.go:89] found id: ""
	I1124 04:09:28.688306  437115 logs.go:282] 0 containers: []
	W1124 04:09:28.688316  437115 logs.go:284] No container was found matching "coredns"
	I1124 04:09:28.688323  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 04:09:28.688383  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 04:09:28.721734  437115 cri.go:89] found id: "25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:28.721758  437115 cri.go:89] found id: ""
	I1124 04:09:28.721767  437115 logs.go:282] 1 containers: [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87]
	I1124 04:09:28.721833  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:28.725914  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 04:09:28.725987  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 04:09:28.757735  437115 cri.go:89] found id: ""
	I1124 04:09:28.757756  437115 logs.go:282] 0 containers: []
	W1124 04:09:28.757764  437115 logs.go:284] No container was found matching "kube-proxy"
	I1124 04:09:28.757769  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 04:09:28.757852  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 04:09:28.783062  437115 cri.go:89] found id: "c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:28.783084  437115 cri.go:89] found id: ""
	I1124 04:09:28.783093  437115 logs.go:282] 1 containers: [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32]
	I1124 04:09:28.783148  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:28.786856  437115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 04:09:28.786971  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 04:09:28.818146  437115 cri.go:89] found id: ""
	I1124 04:09:28.818172  437115 logs.go:282] 0 containers: []
	W1124 04:09:28.818181  437115 logs.go:284] No container was found matching "kindnet"
	I1124 04:09:28.818187  437115 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 04:09:28.818247  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 04:09:28.844007  437115 cri.go:89] found id: ""
	I1124 04:09:28.844032  437115 logs.go:282] 0 containers: []
	W1124 04:09:28.844041  437115 logs.go:284] No container was found matching "storage-provisioner"
	I1124 04:09:28.844050  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:28.844081  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:28.885287  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:28.885322  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:29.022094  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:29.022138  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 04:09:29.053406  437115 logs.go:123] Gathering logs for describe nodes ...
	I1124 04:09:29.053438  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 04:09:29.150799  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 04:09:29.150868  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:29.150897  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:29.198785  437115 logs.go:123] Gathering logs for kube-scheduler [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87] ...
	I1124 04:09:29.198817  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:29.263761  437115 logs.go:123] Gathering logs for kube-controller-manager [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32] ...
	I1124 04:09:29.263800  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:29.291266  437115 logs.go:123] Gathering logs for CRI-O ...
	I1124 04:09:29.291293  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
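While the pause profile converges, process 437115 is looping against a dead apiserver at 192.168.76.2: each pass re-lists CRI containers per component, then re-gathers logs for whatever it found. The listing step amounts to shelling out to crictl; the Go sketch below illustrates it with a hypothetical listContainers helper rather than minikube's cri.go, and assumes passwordless sudo with crictl on PATH.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the IDs of CRI containers whose name matches
	// the given component, mirroring the repeated
	// "sudo crictl ps -a --quiet --name=<component>" calls in the log.
	func listContainers(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps failed for %q: %w", component, err)
		}
		return strings.Fields(string(out)), nil // one 64-hex ID per line
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Println(err)
				continue
			}
			// An empty result corresponds to the log's
			// `No container was found matching "<component>"` warnings.
			fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
		}
	}

In this run only kube-apiserver, kube-scheduler, and kube-controller-manager ever return an ID, which is why etcd, coredns, kube-proxy, kindnet, and storage-provisioner produce the same warnings on every cycle.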
	W1124 04:09:29.634491  451987 pod_ready.go:104] pod "coredns-66bc5c9577-xfr6t" is not "Ready", error: <nil>
	W1124 04:09:32.134245  451987 pod_ready.go:104] pod "coredns-66bc5c9577-xfr6t" is not "Ready", error: <nil>
	I1124 04:09:31.859503  437115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:09:31.859955  437115 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 04:09:31.860003  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 04:09:31.860064  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 04:09:31.886133  437115 cri.go:89] found id: "171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:31.886153  437115 cri.go:89] found id: ""
	I1124 04:09:31.886161  437115 logs.go:282] 1 containers: [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1]
	I1124 04:09:31.886220  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:31.889849  437115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 04:09:31.889920  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 04:09:31.920033  437115 cri.go:89] found id: ""
	I1124 04:09:31.920064  437115 logs.go:282] 0 containers: []
	W1124 04:09:31.920077  437115 logs.go:284] No container was found matching "etcd"
	I1124 04:09:31.920083  437115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 04:09:31.920144  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 04:09:31.946194  437115 cri.go:89] found id: ""
	I1124 04:09:31.946217  437115 logs.go:282] 0 containers: []
	W1124 04:09:31.946225  437115 logs.go:284] No container was found matching "coredns"
	I1124 04:09:31.946231  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 04:09:31.946288  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 04:09:31.976121  437115 cri.go:89] found id: "25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:31.976191  437115 cri.go:89] found id: ""
	I1124 04:09:31.976215  437115 logs.go:282] 1 containers: [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87]
	I1124 04:09:31.976291  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:31.980158  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 04:09:31.980246  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 04:09:32.009856  437115 cri.go:89] found id: ""
	I1124 04:09:32.009881  437115 logs.go:282] 0 containers: []
	W1124 04:09:32.009890  437115 logs.go:284] No container was found matching "kube-proxy"
	I1124 04:09:32.009896  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 04:09:32.009986  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 04:09:32.038205  437115 cri.go:89] found id: "c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:32.038230  437115 cri.go:89] found id: ""
	I1124 04:09:32.038240  437115 logs.go:282] 1 containers: [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32]
	I1124 04:09:32.038303  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:32.042322  437115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 04:09:32.042497  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 04:09:32.069834  437115 cri.go:89] found id: ""
	I1124 04:09:32.069863  437115 logs.go:282] 0 containers: []
	W1124 04:09:32.069873  437115 logs.go:284] No container was found matching "kindnet"
	I1124 04:09:32.069879  437115 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 04:09:32.069944  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 04:09:32.097573  437115 cri.go:89] found id: ""
	I1124 04:09:32.097601  437115 logs.go:282] 0 containers: []
	W1124 04:09:32.097611  437115 logs.go:284] No container was found matching "storage-provisioner"
	I1124 04:09:32.097621  437115 logs.go:123] Gathering logs for CRI-O ...
	I1124 04:09:32.097638  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 04:09:32.159073  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:32.159108  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:32.192530  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:32.192560  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:32.309268  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:32.309310  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 04:09:32.325747  437115 logs.go:123] Gathering logs for describe nodes ...
	I1124 04:09:32.325778  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 04:09:32.396957  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 04:09:32.397020  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:32.397041  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:32.434948  437115 logs.go:123] Gathering logs for kube-scheduler [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87] ...
	I1124 04:09:32.434982  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:32.501225  437115 logs.go:123] Gathering logs for kube-controller-manager [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32] ...
	I1124 04:09:32.501267  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:35.030519  437115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:09:35.031012  437115 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 04:09:35.031061  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 04:09:35.031119  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 04:09:35.059986  437115 cri.go:89] found id: "171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:35.060009  437115 cri.go:89] found id: ""
	I1124 04:09:35.060018  437115 logs.go:282] 1 containers: [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1]
	I1124 04:09:35.060079  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:35.064189  437115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 04:09:35.064267  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 04:09:35.091210  437115 cri.go:89] found id: ""
	I1124 04:09:35.091237  437115 logs.go:282] 0 containers: []
	W1124 04:09:35.091254  437115 logs.go:284] No container was found matching "etcd"
	I1124 04:09:35.091260  437115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 04:09:35.091321  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 04:09:35.123976  437115 cri.go:89] found id: ""
	I1124 04:09:35.123999  437115 logs.go:282] 0 containers: []
	W1124 04:09:35.124007  437115 logs.go:284] No container was found matching "coredns"
	I1124 04:09:35.124013  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 04:09:35.124071  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 04:09:35.159072  437115 cri.go:89] found id: "25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:35.159093  437115 cri.go:89] found id: ""
	I1124 04:09:35.159101  437115 logs.go:282] 1 containers: [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87]
	I1124 04:09:35.159157  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:35.163073  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 04:09:35.163151  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 04:09:35.189626  437115 cri.go:89] found id: ""
	I1124 04:09:35.189657  437115 logs.go:282] 0 containers: []
	W1124 04:09:35.189667  437115 logs.go:284] No container was found matching "kube-proxy"
	I1124 04:09:35.189673  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 04:09:35.189734  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 04:09:35.215678  437115 cri.go:89] found id: "c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:35.215705  437115 cri.go:89] found id: ""
	I1124 04:09:35.215715  437115 logs.go:282] 1 containers: [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32]
	I1124 04:09:35.215775  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:35.219722  437115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 04:09:35.219828  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 04:09:35.253188  437115 cri.go:89] found id: ""
	I1124 04:09:35.253215  437115 logs.go:282] 0 containers: []
	W1124 04:09:35.253224  437115 logs.go:284] No container was found matching "kindnet"
	I1124 04:09:35.253231  437115 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 04:09:35.253294  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 04:09:35.279170  437115 cri.go:89] found id: ""
	I1124 04:09:35.279196  437115 logs.go:282] 0 containers: []
	W1124 04:09:35.279206  437115 logs.go:284] No container was found matching "storage-provisioner"
	I1124 04:09:35.279216  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:35.279228  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 04:09:35.295680  437115 logs.go:123] Gathering logs for describe nodes ...
	I1124 04:09:35.295712  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 04:09:35.366375  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 04:09:35.366399  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:35.366417  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:35.398651  437115 logs.go:123] Gathering logs for kube-scheduler [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87] ...
	I1124 04:09:35.398683  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:35.464542  437115 logs.go:123] Gathering logs for kube-controller-manager [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32] ...
	I1124 04:09:35.464579  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:35.494347  437115 logs.go:123] Gathering logs for CRI-O ...
	I1124 04:09:35.494376  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 04:09:34.133423  451987 pod_ready.go:94] pod "coredns-66bc5c9577-xfr6t" is "Ready"
	I1124 04:09:34.133511  451987 pod_ready.go:86] duration metric: took 6.506197483s for pod "coredns-66bc5c9577-xfr6t" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:34.136671  451987 pod_ready.go:83] waiting for pod "etcd-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:36.142219  451987 pod_ready.go:94] pod "etcd-pause-396108" is "Ready"
	I1124 04:09:36.142247  451987 pod_ready.go:86] duration metric: took 2.005549635s for pod "etcd-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:36.144567  451987 pod_ready.go:83] waiting for pod "kube-apiserver-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:36.149525  451987 pod_ready.go:94] pod "kube-apiserver-pause-396108" is "Ready"
	I1124 04:09:36.149555  451987 pod_ready.go:86] duration metric: took 4.958896ms for pod "kube-apiserver-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:36.151929  451987 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:35.557314  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:35.557348  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:35.591325  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:35.591356  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:38.212664  437115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:09:38.213101  437115 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 04:09:38.213164  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 04:09:38.213240  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 04:09:38.239721  437115 cri.go:89] found id: "171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:38.239743  437115 cri.go:89] found id: ""
	I1124 04:09:38.239752  437115 logs.go:282] 1 containers: [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1]
	I1124 04:09:38.239812  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:38.243591  437115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 04:09:38.243673  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 04:09:38.270346  437115 cri.go:89] found id: ""
	I1124 04:09:38.270370  437115 logs.go:282] 0 containers: []
	W1124 04:09:38.270379  437115 logs.go:284] No container was found matching "etcd"
	I1124 04:09:38.270386  437115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 04:09:38.270444  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 04:09:38.296322  437115 cri.go:89] found id: ""
	I1124 04:09:38.296346  437115 logs.go:282] 0 containers: []
	W1124 04:09:38.296355  437115 logs.go:284] No container was found matching "coredns"
	I1124 04:09:38.296361  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 04:09:38.296422  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 04:09:38.322556  437115 cri.go:89] found id: "25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:38.322586  437115 cri.go:89] found id: ""
	I1124 04:09:38.322596  437115 logs.go:282] 1 containers: [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87]
	I1124 04:09:38.322651  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:38.326350  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 04:09:38.326427  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 04:09:38.354850  437115 cri.go:89] found id: ""
	I1124 04:09:38.354874  437115 logs.go:282] 0 containers: []
	W1124 04:09:38.354884  437115 logs.go:284] No container was found matching "kube-proxy"
	I1124 04:09:38.354891  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 04:09:38.354951  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 04:09:38.382626  437115 cri.go:89] found id: "c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:38.382649  437115 cri.go:89] found id: ""
	I1124 04:09:38.382658  437115 logs.go:282] 1 containers: [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32]
	I1124 04:09:38.382714  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:38.386591  437115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 04:09:38.386727  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 04:09:38.413321  437115 cri.go:89] found id: ""
	I1124 04:09:38.413349  437115 logs.go:282] 0 containers: []
	W1124 04:09:38.413358  437115 logs.go:284] No container was found matching "kindnet"
	I1124 04:09:38.413370  437115 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 04:09:38.413434  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 04:09:38.447096  437115 cri.go:89] found id: ""
	I1124 04:09:38.447118  437115 logs.go:282] 0 containers: []
	W1124 04:09:38.447127  437115 logs.go:284] No container was found matching "storage-provisioner"
	I1124 04:09:38.447135  437115 logs.go:123] Gathering logs for describe nodes ...
	I1124 04:09:38.447147  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 04:09:38.517008  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 04:09:38.517031  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:38.517046  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:38.556318  437115 logs.go:123] Gathering logs for kube-scheduler [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87] ...
	I1124 04:09:38.556350  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:38.616511  437115 logs.go:123] Gathering logs for kube-controller-manager [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32] ...
	I1124 04:09:38.616548  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:38.644755  437115 logs.go:123] Gathering logs for CRI-O ...
	I1124 04:09:38.644835  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 04:09:38.709999  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:38.710039  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:38.741969  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:38.741998  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:38.869515  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:38.869563  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1124 04:09:38.157420  451987 pod_ready.go:104] pod "kube-controller-manager-pause-396108" is not "Ready", error: <nil>
	W1124 04:09:40.158124  451987 pod_ready.go:104] pod "kube-controller-manager-pause-396108" is not "Ready", error: <nil>
	I1124 04:09:41.657944  451987 pod_ready.go:94] pod "kube-controller-manager-pause-396108" is "Ready"
	I1124 04:09:41.657968  451987 pod_ready.go:86] duration metric: took 5.506012123s for pod "kube-controller-manager-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:41.660568  451987 pod_ready.go:83] waiting for pod "kube-proxy-scjq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:41.665187  451987 pod_ready.go:94] pod "kube-proxy-scjq4" is "Ready"
	I1124 04:09:41.665253  451987 pod_ready.go:86] duration metric: took 4.662768ms for pod "kube-proxy-scjq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:41.677066  451987 pod_ready.go:83] waiting for pod "kube-scheduler-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:41.693528  451987 pod_ready.go:94] pod "kube-scheduler-pause-396108" is "Ready"
	I1124 04:09:41.693552  451987 pod_ready.go:86] duration metric: took 16.463912ms for pod "kube-scheduler-pause-396108" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:09:41.693565  451987 pod_ready.go:40] duration metric: took 14.069859482s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:09:41.793418  451987 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 04:09:41.796352  451987 out.go:179] * Done! kubectl is now configured to use "pause-396108" cluster and "default" namespace by default
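That completes the pause-396108 run: every control-plane pod reached Ready and the profile finished cleanly in about 14 seconds of extra waiting. The per-pod wait logged above reduces to checking the PodReady condition; the client-go sketch below is an illustration of that predicate, not minikube's pod_ready.go, and assumes a kubeconfig at the default location.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True,
	// the same predicate behind the pod_ready.go lines above.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: a kubeconfig pointing at the cluster, e.g. the one
		// minikube writes for the "pause-396108" profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for ctx.Err() == nil {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-396108", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod")
	}

From here the transcript returns to process 437115, which never reaches this point: its apiserver on 192.168.76.2 keeps refusing connections through the end of the capture.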
	I1124 04:09:41.388129  437115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:09:41.388797  437115 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 04:09:41.388849  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 04:09:41.388907  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 04:09:41.427850  437115 cri.go:89] found id: "171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:41.427917  437115 cri.go:89] found id: ""
	I1124 04:09:41.427933  437115 logs.go:282] 1 containers: [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1]
	I1124 04:09:41.428005  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:41.431926  437115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 04:09:41.432046  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 04:09:41.459594  437115 cri.go:89] found id: ""
	I1124 04:09:41.459621  437115 logs.go:282] 0 containers: []
	W1124 04:09:41.459630  437115 logs.go:284] No container was found matching "etcd"
	I1124 04:09:41.459636  437115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 04:09:41.459696  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 04:09:41.487107  437115 cri.go:89] found id: ""
	I1124 04:09:41.487176  437115 logs.go:282] 0 containers: []
	W1124 04:09:41.487199  437115 logs.go:284] No container was found matching "coredns"
	I1124 04:09:41.487218  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 04:09:41.487306  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 04:09:41.515020  437115 cri.go:89] found id: "25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:41.515045  437115 cri.go:89] found id: ""
	I1124 04:09:41.515055  437115 logs.go:282] 1 containers: [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87]
	I1124 04:09:41.515124  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:41.519770  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 04:09:41.519898  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 04:09:41.550538  437115 cri.go:89] found id: ""
	I1124 04:09:41.550574  437115 logs.go:282] 0 containers: []
	W1124 04:09:41.550583  437115 logs.go:284] No container was found matching "kube-proxy"
	I1124 04:09:41.550590  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 04:09:41.550661  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 04:09:41.576593  437115 cri.go:89] found id: "c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:41.576616  437115 cri.go:89] found id: ""
	I1124 04:09:41.576625  437115 logs.go:282] 1 containers: [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32]
	I1124 04:09:41.576685  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:41.580560  437115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 04:09:41.580639  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 04:09:41.626055  437115 cri.go:89] found id: ""
	I1124 04:09:41.626080  437115 logs.go:282] 0 containers: []
	W1124 04:09:41.626089  437115 logs.go:284] No container was found matching "kindnet"
	I1124 04:09:41.626096  437115 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 04:09:41.626156  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 04:09:41.656617  437115 cri.go:89] found id: ""
	I1124 04:09:41.656647  437115 logs.go:282] 0 containers: []
	W1124 04:09:41.656667  437115 logs.go:284] No container was found matching "storage-provisioner"
	I1124 04:09:41.656678  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:41.656688  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:41.851089  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:41.851129  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 04:09:41.868288  437115 logs.go:123] Gathering logs for describe nodes ...
	I1124 04:09:41.868320  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 04:09:41.947561  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 04:09:41.947585  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:41.947599  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:42.011107  437115 logs.go:123] Gathering logs for kube-scheduler [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87] ...
	I1124 04:09:42.011150  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:42.097030  437115 logs.go:123] Gathering logs for kube-controller-manager [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32] ...
	I1124 04:09:42.097070  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:42.138872  437115 logs.go:123] Gathering logs for CRI-O ...
	I1124 04:09:42.138907  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 04:09:42.226576  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:42.226626  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:44.780944  437115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:09:44.781356  437115 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 04:09:44.781394  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 04:09:44.781445  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 04:09:44.828249  437115 cri.go:89] found id: "171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
	I1124 04:09:44.828268  437115 cri.go:89] found id: ""
	I1124 04:09:44.828283  437115 logs.go:282] 1 containers: [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1]
	I1124 04:09:44.828341  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:44.835785  437115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 04:09:44.835859  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 04:09:44.874572  437115 cri.go:89] found id: ""
	I1124 04:09:44.874597  437115 logs.go:282] 0 containers: []
	W1124 04:09:44.874606  437115 logs.go:284] No container was found matching "etcd"
	I1124 04:09:44.874613  437115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 04:09:44.874696  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 04:09:44.911718  437115 cri.go:89] found id: ""
	I1124 04:09:44.911746  437115 logs.go:282] 0 containers: []
	W1124 04:09:44.911758  437115 logs.go:284] No container was found matching "coredns"
	I1124 04:09:44.911764  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 04:09:44.911840  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 04:09:44.949325  437115 cri.go:89] found id: "25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87"
	I1124 04:09:44.949343  437115 cri.go:89] found id: ""
	I1124 04:09:44.949351  437115 logs.go:282] 1 containers: [25f783cb42458eefc611b3c08ac4755fd987edb55766b1e5ab4b7cff18252f87]
	I1124 04:09:44.949405  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:44.954721  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 04:09:44.954795  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 04:09:44.990544  437115 cri.go:89] found id: ""
	I1124 04:09:44.990565  437115 logs.go:282] 0 containers: []
	W1124 04:09:44.990573  437115 logs.go:284] No container was found matching "kube-proxy"
	I1124 04:09:44.990579  437115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 04:09:44.990635  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 04:09:45.047109  437115 cri.go:89] found id: "c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32"
	I1124 04:09:45.047131  437115 cri.go:89] found id: ""
	I1124 04:09:45.047141  437115 logs.go:282] 1 containers: [c8b0c920dd08d8037aff291dd4989aea234883acffa258d757d4c1e31620be32]
	I1124 04:09:45.047207  437115 ssh_runner.go:195] Run: which crictl
	I1124 04:09:45.062216  437115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 04:09:45.062309  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 04:09:45.157089  437115 cri.go:89] found id: ""
	I1124 04:09:45.157181  437115 logs.go:282] 0 containers: []
	W1124 04:09:45.157208  437115 logs.go:284] No container was found matching "kindnet"
	I1124 04:09:45.157231  437115 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 04:09:45.157361  437115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 04:09:45.200721  437115 cri.go:89] found id: ""
	I1124 04:09:45.200806  437115 logs.go:282] 0 containers: []
	W1124 04:09:45.200834  437115 logs.go:284] No container was found matching "storage-provisioner"
	I1124 04:09:45.200879  437115 logs.go:123] Gathering logs for container status ...
	I1124 04:09:45.200915  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 04:09:45.262835  437115 logs.go:123] Gathering logs for kubelet ...
	I1124 04:09:45.262865  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 04:09:45.405565  437115 logs.go:123] Gathering logs for dmesg ...
	I1124 04:09:45.405644  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 04:09:45.427848  437115 logs.go:123] Gathering logs for describe nodes ...
	I1124 04:09:45.427876  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 04:09:45.522497  437115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 04:09:45.522580  437115 logs.go:123] Gathering logs for kube-apiserver [171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1] ...
	I1124 04:09:45.522664  437115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1"
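
	The trace above is minikube's fallback diagnostics path once the apiserver healthz probe at https://192.168.76.2:8443/healthz returns connection refused: it enumerates containers per component with crictl, then collects the kubelet and CRI-O journals and individual container logs. A minimal sketch for reproducing the same checks by hand, assuming shell access to the node (e.g. via `minikube ssh -p <profile>`):

	    # Probe the apiserver health endpoint directly (self-signed cert, hence -k)
	    curl -k https://192.168.76.2:8443/healthz

	    # List all apiserver containers known to CRI-O, running or exited
	    sudo crictl ps -a --name=kube-apiserver

	    # Tail the logs of a container ID reported above
	    sudo crictl logs --tail 400 171d3492e49f65dd537a6a21f35504a56b3e70d0e075f099c195d0bf74ef9af1

	    # Fall back to the runtime journal when no container is found
	    sudo journalctl -u crio -n 400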
	
	
	==> CRI-O <==
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.025833799Z" level=info msg="Started container" PID=2217 containerID=5774c046abe9043aadbfd6b9c4831cb9ed1b5f813e056e0057a80407ea6f6d2f description=kube-system/coredns-66bc5c9577-xfr6t/coredns id=ccb5055d-f8fd-481a-9883-bd0890d41282 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4840c1c8edc3b4d01ae73cc6e1cf4fc0e1670d5b6a16d2e31fbbbaa140221352
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.029977126Z" level=info msg="Created container e043bea1c710a5ce21da1af2f69f48cc7f408d94c02ad317ef742420e6047668: kube-system/kube-scheduler-pause-396108/kube-scheduler" id=e3ac6885-03ab-44c9-852b-3c24c77a3801 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.030596877Z" level=info msg="Starting container: e043bea1c710a5ce21da1af2f69f48cc7f408d94c02ad317ef742420e6047668" id=d506ed95-3217-4264-b49b-dfec18cb4619 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.036393835Z" level=info msg="Created container 1b824daf1dec3f99a1d5c16e27c0cfc32418b57d8ec747134d3686b45610c51f: kube-system/etcd-pause-396108/etcd" id=a4f2fb11-190f-4ad3-a2f0-d83a1435350a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.037613965Z" level=info msg="Starting container: 1b824daf1dec3f99a1d5c16e27c0cfc32418b57d8ec747134d3686b45610c51f" id=49bc72bc-5d01-4367-9a8f-85c843fb34bf name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.049224658Z" level=info msg="Started container" PID=2222 containerID=1b824daf1dec3f99a1d5c16e27c0cfc32418b57d8ec747134d3686b45610c51f description=kube-system/etcd-pause-396108/etcd id=49bc72bc-5d01-4367-9a8f-85c843fb34bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=c53edc38cd4aaed5adc34ba312db7610711e2b097c869bb7876b5d8602eb0493
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.049788729Z" level=info msg="Started container" PID=2219 containerID=e043bea1c710a5ce21da1af2f69f48cc7f408d94c02ad317ef742420e6047668 description=kube-system/kube-scheduler-pause-396108/kube-scheduler id=d506ed95-3217-4264-b49b-dfec18cb4619 name=/runtime.v1.RuntimeService/StartContainer sandboxID=26b0fa11607255a3e5608e30feda103f23f01293cb1c4a1084043152795c9a66
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.090143613Z" level=info msg="Created container ec93103e13f8d61b3bbca237e6337d82343ed31e17f5875e3885f8f49c8ba154: kube-system/kube-controller-manager-pause-396108/kube-controller-manager" id=1327f501-2537-4ed0-91e6-704b59f99435 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.091462658Z" level=info msg="Starting container: ec93103e13f8d61b3bbca237e6337d82343ed31e17f5875e3885f8f49c8ba154" id=1dcadf34-3b05-4b92-99de-59245effb443 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.092672851Z" level=info msg="Created container 510de710da2ba8ea819f9bd7d6008fc188c474758dc0982bc622288bdaf88a09: kube-system/kindnet-mfqdh/kindnet-cni" id=4ffca681-ae0d-46e2-b80b-4a4fe2345794 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.093403332Z" level=info msg="Starting container: 510de710da2ba8ea819f9bd7d6008fc188c474758dc0982bc622288bdaf88a09" id=5928429e-6d96-42b5-85ae-483912530f11 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.094667492Z" level=info msg="Started container" PID=2260 containerID=ec93103e13f8d61b3bbca237e6337d82343ed31e17f5875e3885f8f49c8ba154 description=kube-system/kube-controller-manager-pause-396108/kube-controller-manager id=1dcadf34-3b05-4b92-99de-59245effb443 name=/runtime.v1.RuntimeService/StartContainer sandboxID=69b0953b6c60580f2d938e26502ad893c931d1f9d06d9c058be5b1d5502cc9b7
	Nov 24 04:09:22 pause-396108 crio[2069]: time="2025-11-24T04:09:22.102719024Z" level=info msg="Started container" PID=2250 containerID=510de710da2ba8ea819f9bd7d6008fc188c474758dc0982bc622288bdaf88a09 description=kube-system/kindnet-mfqdh/kindnet-cni id=5928429e-6d96-42b5-85ae-483912530f11 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc12e43147e2d03874b3d464ffb4bb201a715b3e38cc7dd82a51041f4db807fd
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.471046385Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.482341339Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.482595021Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.482682555Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.488679432Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.488849511Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.489028894Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.498885738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.499061716Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.499146796Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.504757157Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:09:32 pause-396108 crio[2069]: time="2025-11-24T04:09:32.504924159Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	510de710da2ba       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   25 seconds ago       Running             kindnet-cni               1                   dc12e43147e2d       kindnet-mfqdh                          kube-system
	ec93103e13f8d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   25 seconds ago       Running             kube-controller-manager   1                   69b0953b6c605       kube-controller-manager-pause-396108   kube-system
	1b824daf1dec3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   25 seconds ago       Running             etcd                      1                   c53edc38cd4aa       etcd-pause-396108                      kube-system
	9d6051a594d15       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   25 seconds ago       Running             kube-apiserver            1                   3888b10d33971       kube-apiserver-pause-396108            kube-system
	5774c046abe90       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   25 seconds ago       Running             coredns                   1                   4840c1c8edc3b       coredns-66bc5c9577-xfr6t               kube-system
	e043bea1c710a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   25 seconds ago       Running             kube-scheduler            1                   26b0fa1160725       kube-scheduler-pause-396108            kube-system
	848b7a5b1960b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   25 seconds ago       Running             kube-proxy                1                   9b2c9f5fdaf22       kube-proxy-scjq4                       kube-system
	490bbd4ce436c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   4840c1c8edc3b       coredns-66bc5c9577-xfr6t               kube-system
	c8a48af8745d6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   dc12e43147e2d       kindnet-mfqdh                          kube-system
	11beefbf1a8a0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   9b2c9f5fdaf22       kube-proxy-scjq4                       kube-system
	c7f699f1cc0ef       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   26b0fa1160725       kube-scheduler-pause-396108            kube-system
	d013da0147b89       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   3888b10d33971       kube-apiserver-pause-396108            kube-system
	5b712eafb8e88       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   69b0953b6c605       kube-controller-manager-pause-396108   kube-system
	94b696efd6b43       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   c53edc38cd4aa       etcd-pause-396108                      kube-system
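
	Every control-plane component appears twice in the table above: an Exited ATTEMPT-0 container from the first boot and a Running ATTEMPT-1 replacement sharing the same POD ID, i.e. the pod sandboxes survived and only the containers were restarted. The same view can be narrowed on the node with crictl's filters (a sketch; flag names per current crictl):

	    # only exited containers whose name matches a regex
	    sudo crictl ps -a --state exited --name coredns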
	
	
	==> coredns [490bbd4ce436cb05c5881f746e8020778291193555fa5f32a43ae3598eddbd0d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60143 - 40329 "HINFO IN 6168204137401324256.8467796082807871234. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023768706s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5774c046abe9043aadbfd6b9c4831cb9ed1b5f813e056e0057a80407ea6f6d2f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60637 - 31050 "HINFO IN 8734035371615620278.953058895943700928. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013299819s
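
	The replacement coredns came up before the apiserver was reachable through the 10.96.0.1 service VIP: its kubernetes plugin retried its list calls (connection refused), held the server in "waiting for Kubernetes API" state, and only then began serving on :53. A quick way to confirm DNS recovered, assuming the standard kubeadm `k8s-app=kube-dns` label:

	    kubectl -n kube-system get pods -l k8s-app=kube-dns
	    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20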
	
	
	==> describe nodes <==
	Name:               pause-396108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-396108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=pause-396108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_08_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:08:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-396108
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:09:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:09:09 +0000   Mon, 24 Nov 2025 04:08:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:09:09 +0000   Mon, 24 Nov 2025 04:08:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:09:09 +0000   Mon, 24 Nov 2025 04:08:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:09:09 +0000   Mon, 24 Nov 2025 04:09:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-396108
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                5e1abedc-8af0-4c26-815b-98375e2397ff
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-xfr6t                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-396108                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         85s
	  kube-system                 kindnet-mfqdh                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-396108             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-pause-396108    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-scjq4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-396108             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 78s                kube-proxy       
	  Normal   Starting                 20s                kube-proxy       
	  Warning  CgroupV1                 93s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  93s (x8 over 93s)  kubelet          Node pause-396108 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    93s (x8 over 93s)  kubelet          Node pause-396108 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     93s (x8 over 93s)  kubelet          Node pause-396108 status is now: NodeHasSufficientPID
	  Normal   Starting                 85s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 85s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  85s                kubelet          Node pause-396108 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s                kubelet          Node pause-396108 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s                kubelet          Node pause-396108 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s                node-controller  Node pause-396108 event: Registered Node pause-396108 in Controller
	  Normal   NodeReady                38s                kubelet          Node pause-396108 status is now: NodeReady
	  Normal   RegisteredNode           18s                node-controller  Node pause-396108 event: Registered Node pause-396108 in Controller
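
	The events show two generations of startup: RegisteredNode at 81s and again at 18s, kube-proxy Starting at 78s and again at 20s, consistent with the control plane being restarted in place during the pause test, with NodeReady landing at 38s. A hedged pair of queries to pull the same state directly:

	    kubectl get node pause-396108 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	    kubectl get events --field-selector involvedObject.name=pause-396108 --sort-by=.lastTimestamp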
	
	
	==> dmesg <==
	[ +25.584783] overlayfs: idmapped layers are currently not supported
	[Nov24 03:42] overlayfs: idmapped layers are currently not supported
	[Nov24 03:43] overlayfs: idmapped layers are currently not supported
	[  +2.949427] overlayfs: idmapped layers are currently not supported
	[Nov24 03:44] overlayfs: idmapped layers are currently not supported
	[Nov24 03:45] overlayfs: idmapped layers are currently not supported
	[Nov24 03:46] overlayfs: idmapped layers are currently not supported
	[Nov24 03:51] overlayfs: idmapped layers are currently not supported
	[ +32.185990] overlayfs: idmapped layers are currently not supported
	[Nov24 03:52] overlayfs: idmapped layers are currently not supported
	[Nov24 03:54] overlayfs: idmapped layers are currently not supported
	[Nov24 03:55] overlayfs: idmapped layers are currently not supported
	[Nov24 03:56] overlayfs: idmapped layers are currently not supported
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
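
	The dmesg section is dominated by repeated `overlayfs: idmapped layers are currently not supported` warnings from the 5.15 aws kernel; these fire on container image mounts and are almost certainly host noise rather than anything test-specific. To filter them out when scanning for real kernel errors (a sketch; `--level` per util-linux dmesg, as used by minikube above):

	    sudo dmesg --level warn,err,crit,alert,emerg | grep -v 'overlayfs: idmapped'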
	
	
	==> etcd [1b824daf1dec3f99a1d5c16e27c0cfc32418b57d8ec747134d3686b45610c51f] <==
	{"level":"warn","ts":"2025-11-24T04:09:24.413000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.419591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.442949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.467827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.493026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.500963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.539570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.576489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.581252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.608421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.631036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.658522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.672966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.690512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.705836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.726560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.771259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.776717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.810689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.832515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.853653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.898899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.911202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:24.937078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:09:25.018872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44918","server-name":"","error":"EOF"}
	
	
	==> etcd [94b696efd6b4382c567f3d5ef6f1fd9532fd20a48c4c2f66e053c1586cb5b17e] <==
	{"level":"warn","ts":"2025-11-24T04:08:18.842191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:08:18.856256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:08:18.880240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:08:18.908822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:08:18.932406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:08:18.951020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:08:19.046343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39712","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T04:09:14.435486Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T04:09:14.435569Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-396108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-24T04:09:14.435724Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T04:09:14.597603Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T04:09:14.597718Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T04:09:14.597769Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-24T04:09:14.597934Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T04:09:14.597951Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-24T04:09:14.598941Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T04:09:14.599104Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T04:09:14.599153Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T04:09:14.599056Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T04:09:14.599224Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T04:09:14.599268Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T04:09:14.601284Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-24T04:09:14.601359Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T04:09:14.601412Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T04:09:14.601421Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-396108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 04:09:47 up  2:51,  0 user,  load average: 2.21, 2.55, 2.27
	Linux pause-396108 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [510de710da2ba8ea819f9bd7d6008fc188c474758dc0982bc622288bdaf88a09] <==
	I1124 04:09:22.252042       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:09:22.252425       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 04:09:22.256026       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:09:22.256120       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:09:22.256163       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:09:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:09:22.477751       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:09:22.477782       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:09:22.477792       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:09:22.477912       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 04:09:26.678398       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:09:26.678433       1 metrics.go:72] Registering metrics
	I1124 04:09:26.678578       1 controller.go:711] "Syncing nftables rules"
	I1124 04:09:32.470553       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:09:32.470704       1 main.go:301] handling current node
	I1124 04:09:42.464462       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:09:42.464531       1 main.go:301] handling current node
	
	
	==> kindnet [c8a48af8745d6e7ed60d98860bbdd720ffd910fb5f4ca540179f0c1ded57c194] <==
	I1124 04:08:28.727531       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:08:28.727915       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 04:08:28.728099       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:08:28.728112       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:08:28.728142       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:08:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:08:29.015692       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:08:29.015721       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:08:29.015731       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:08:29.016545       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 04:08:59.016097       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 04:08:59.016230       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 04:08:59.016315       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 04:08:59.017595       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1124 04:09:00.015979       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:09:00.016004       1 metrics.go:72] Registering metrics
	I1124 04:09:00.016091       1 controller.go:711] "Syncing nftables rules"
	I1124 04:09:09.022603       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:09:09.022653       1 main.go:301] handling current node
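
	This earlier kindnet instance shows the other side of the restart: its first list calls against 10.96.0.1:443 timed out before its caches synced at 04:09:00, it handled the node once at 04:09:09, and the container exited when the runtime restarted the pods around 04:09:14. To tail whichever instance is current, assuming the daemonset keeps the upstream `app=kindnet` label:

	    kubectl -n kube-system logs -l app=kindnet --tail=20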
	
	
	==> kube-apiserver [9d6051a594d1510f507c94cbd06679c59fda5ce1b35b721c4b945044a6c20cff] <==
	I1124 04:09:26.640698       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 04:09:26.641086       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 04:09:26.641144       1 policy_source.go:240] refreshing policies
	I1124 04:09:26.641923       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 04:09:26.641997       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 04:09:26.642108       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 04:09:26.642347       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:09:26.642395       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 04:09:26.642447       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 04:09:26.643980       1 aggregator.go:171] initial CRD sync complete...
	I1124 04:09:26.644398       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 04:09:26.644446       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 04:09:26.644477       1 cache.go:39] Caches are synced for autoregister controller
	I1124 04:09:26.645243       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:09:26.646076       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 04:09:26.646161       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 04:09:26.648235       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 04:09:26.657675       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1124 04:09:26.671349       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 04:09:27.276529       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:09:27.718043       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:09:29.125484       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 04:09:29.160641       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 04:09:29.410542       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 04:09:29.464073       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [d013da0147b8934c86c12720efd80ec59f61d17784274267bb11bc96acac4c94] <==
	W1124 04:09:14.472042       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.472179       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.472320       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.472467       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.472597       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.473013       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.473186       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.473325       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.473554       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.473888       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.474021       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.484306       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.484530       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.484685       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.484843       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.484990       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485121       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485387       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485512       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485650       1 logging.go:55] [core] [Channel #26 SubChannel #28]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485720       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485782       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485859       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.485936       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 04:09:14.486006       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
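
	The flood of addrConn.createTransport failures records the old apiserver losing every etcd client channel once etcd began shutting down at 04:09:14; the stray `%!p(...)` fragments are fmt-verb artifacts in grpc's attribute logging and are present in the raw log itself. A rough way to gauge the scale from the node (crictl accepts any unique ID prefix):

	    sudo crictl logs d013da0147b89 2>&1 | grep -c 'connection refused'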
	
	
	==> kube-controller-manager [5b712eafb8e889f1f03dcbca8e8cff4e20805686a79699de0f7ac21affd0b9f9] <==
	I1124 04:08:26.928195       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 04:08:26.929357       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:08:26.933337       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 04:08:26.933372       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 04:08:26.933423       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 04:08:26.933451       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 04:08:26.933456       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 04:08:26.933461       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 04:08:26.939043       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 04:08:26.939815       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 04:08:26.945544       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 04:08:26.957810       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:08:26.961490       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 04:08:26.964376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:08:26.964397       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:08:26.964405       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:08:26.966515       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 04:08:26.966562       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 04:08:26.966714       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 04:08:26.966735       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 04:08:26.966934       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 04:08:26.967625       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 04:08:26.990946       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-396108" podCIDRs=["10.244.0.0/24"]
	I1124 04:08:26.991071       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 04:09:11.966539       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [ec93103e13f8d61b3bbca237e6337d82343ed31e17f5875e3885f8f49c8ba154] <==
	I1124 04:09:29.085773       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 04:09:29.086662       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 04:09:29.086667       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 04:09:29.094538       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 04:09:29.094634       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 04:09:29.095048       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 04:09:29.095097       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 04:09:29.095173       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 04:09:29.095312       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 04:09:29.098624       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 04:09:29.103308       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 04:09:29.103409       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 04:09:29.113060       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:09:29.113132       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 04:09:29.113204       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 04:09:29.113240       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:09:29.113252       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:09:29.113260       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:09:29.113333       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 04:09:29.113404       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 04:09:29.114290       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 04:09:29.114325       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 04:09:29.114340       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 04:09:29.139817       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:09:29.145299       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [11beefbf1a8a00864fe3381844d37aedc1f9ce9831b394bb7bdc1ded4239d89e] <==
	I1124 04:08:28.651477       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:08:28.748416       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:08:28.848808       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:08:28.848844       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 04:08:28.848932       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:08:28.932206       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:08:28.932255       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:08:28.945116       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:08:28.945450       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:08:28.945462       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:08:28.947472       1 config.go:200] "Starting service config controller"
	I1124 04:08:28.947487       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:08:28.947503       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:08:28.947510       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:08:28.947532       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:08:28.947536       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:08:28.948150       1 config.go:309] "Starting node config controller"
	I1124 04:08:28.948157       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:08:28.948163       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:08:29.047770       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:08:29.047814       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 04:08:29.047874       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [848b7a5b1960bf771106d3ade5f36482eb5247f3fdffae808cd1b74ec8b48cb5] <==
	I1124 04:09:24.868630       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:09:26.704366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:09:26.814527       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:09:26.814657       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 04:09:26.815712       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:09:26.928372       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:09:26.928499       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:09:26.937615       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:09:26.938003       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:09:26.938184       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:09:26.939503       1 config.go:200] "Starting service config controller"
	I1124 04:09:26.939557       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:09:26.939600       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:09:26.939627       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:09:26.939663       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:09:26.939689       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:09:26.940384       1 config.go:309] "Starting node config controller"
	I1124 04:09:26.943138       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:09:26.943192       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:09:27.040376       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:09:27.040488       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:09:27.040513       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c7f699f1cc0ef484fe9224d86d2fb5cdd924b5f1e89ed3e12a85d92c72cc378e] <==
	E1124 04:08:20.357634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 04:08:20.357696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 04:08:20.357748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 04:08:20.357822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 04:08:20.357879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 04:08:20.358105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 04:08:20.358225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 04:08:20.358271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 04:08:20.358287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 04:08:20.358302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 04:08:20.358360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 04:08:20.358414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 04:08:21.159907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 04:08:21.177663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 04:08:21.195203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 04:08:21.225886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 04:08:21.226598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 04:08:21.321629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 04:08:21.345920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1124 04:08:21.916038       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:09:14.463066       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 04:09:14.464099       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 04:09:14.471263       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 04:09:14.471360       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 04:09:14.473403       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e043bea1c710a5ce21da1af2f69f48cc7f408d94c02ad317ef742420e6047668] <==
	I1124 04:09:25.285813       1 serving.go:386] Generated self-signed cert in-memory
	I1124 04:09:27.167194       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 04:09:27.167319       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:09:27.179918       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 04:09:27.180146       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 04:09:27.180200       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 04:09:27.180246       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 04:09:27.182308       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:09:27.194519       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:09:27.190516       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:09:27.194780       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:09:27.280560       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 04:09:27.295293       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:09:27.295403       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:09:21 pause-396108 kubelet[1304]: I1124 04:09:21.870570    1304 scope.go:117] "RemoveContainer" containerID="5b712eafb8e889f1f03dcbca8e8cff4e20805686a79699de0f7ac21affd0b9f9"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.871179    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-xfr6t\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fd71cd99-c8ae-4289-91db-9e0d7fe80820" pod="kube-system/coredns-66bc5c9577-xfr6t"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.871463    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="645c5104c13988790f9502b276745a8a" pod="kube-system/etcd-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.871714    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="01d4d8b7353208f37f9c78a2f5d85171" pod="kube-system/kube-scheduler-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.871956    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="479c0426591e889826070894e4ec2fe6" pod="kube-system/kube-apiserver-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.872206    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6f2a5d5024437d0bdcaef8a7380af89f" pod="kube-system/kube-controller-manager-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.872453    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scjq4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="55cb7252-6c8b-4499-8353-ffca1a4f06d1" pod="kube-system/kube-proxy-scjq4"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: I1124 04:09:21.929656    1304 scope.go:117] "RemoveContainer" containerID="c8a48af8745d6e7ed60d98860bbdd720ffd910fb5f4ca540179f0c1ded57c194"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.930165    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="645c5104c13988790f9502b276745a8a" pod="kube-system/etcd-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.930401    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="01d4d8b7353208f37f9c78a2f5d85171" pod="kube-system/kube-scheduler-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.931024    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="479c0426591e889826070894e4ec2fe6" pod="kube-system/kube-apiserver-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.931262    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-396108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6f2a5d5024437d0bdcaef8a7380af89f" pod="kube-system/kube-controller-manager-pause-396108"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.931444    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scjq4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="55cb7252-6c8b-4499-8353-ffca1a4f06d1" pod="kube-system/kube-proxy-scjq4"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.931647    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-mfqdh\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bb8e0be7-54b6-4486-9171-829f7caa1732" pod="kube-system/kindnet-mfqdh"
	Nov 24 04:09:21 pause-396108 kubelet[1304]: E1124 04:09:21.931797    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-xfr6t\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fd71cd99-c8ae-4289-91db-9e0d7fe80820" pod="kube-system/coredns-66bc5c9577-xfr6t"
	Nov 24 04:09:22 pause-396108 kubelet[1304]: W1124 04:09:22.833460    1304 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 24 04:09:26 pause-396108 kubelet[1304]: E1124 04:09:26.456857    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-mfqdh\" is forbidden: User \"system:node:pause-396108\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-396108' and this object" podUID="bb8e0be7-54b6-4486-9171-829f7caa1732" pod="kube-system/kindnet-mfqdh"
	Nov 24 04:09:26 pause-396108 kubelet[1304]: E1124 04:09:26.457122    1304 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-396108\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-396108' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 24 04:09:26 pause-396108 kubelet[1304]: E1124 04:09:26.457145    1304 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-396108\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-396108' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 24 04:09:26 pause-396108 kubelet[1304]: E1124 04:09:26.457176    1304 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-396108\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-396108' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 24 04:09:26 pause-396108 kubelet[1304]: E1124 04:09:26.523522    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-xfr6t\" is forbidden: User \"system:node:pause-396108\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-396108' and this object" podUID="fd71cd99-c8ae-4289-91db-9e0d7fe80820" pod="kube-system/coredns-66bc5c9577-xfr6t"
	Nov 24 04:09:32 pause-396108 kubelet[1304]: W1124 04:09:32.855465    1304 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 24 04:09:42 pause-396108 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 04:09:42 pause-396108 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 04:09:42 pause-396108 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-396108 -n pause-396108
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-396108 -n pause-396108: exit status 2 (430.602064ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
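For context on the check above: the --format={{.APIServer}} argument is a Go text/template rendered against the profile's per-component status, which is why the command can print "Running" for the API server alone while still exiting 2 to flag the overall degraded state. A minimal stand-in for that rendering (the Status struct below is illustrative, not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative stand-in for the per-profile status fields that the
	// --format template is evaluated against; field names are assumptions.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		// Same template string the harness passes via --format.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}); err != nil {
			panic(err)
		}
	}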
helpers_test.go:269: (dbg) Run:  kubectl --context pause-396108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.75s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-762702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-762702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (263.591636ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:13:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-762702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
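The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused-state pre-check: before enabling an addon it lists the node's runc containers, and here `sudo runc list -f json` exits 1 because the runc state directory /run/runc is missing on the CRI-O node. A rough sketch of that check, run locally for illustration rather than through minikube's SSH runner:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// `runc list -f json` emits an array of container state objects;
	// only the fields this sketch needs are declared.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running", "paused"
	}

	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// On this node the command fails with
			// "open /run/runc: no such file or directory",
			// which minikube surfaces as MK_ADDON_ENABLE_PAUSED.
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		fmt.Println(listPaused())
	}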
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-762702 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-762702 describe deploy/metrics-server -n kube-system: exit status 1 (94.058154ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-762702 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
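The expectation at start_stop_delete_test.go:219 is that the --images/--registries flags rewrote the metrics-server image to fake.domain/registry.k8s.io/echoserver:1.4; because the enable itself failed, no deployment exists, the describe output is empty, and the substring check trivially fails. A simplified form of that assertion (the real test wraps this in its helper machinery):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Mirrors the kubectl describe the test runs; with the addon never
		// enabled, this returns NotFound and out carries no image line.
		out, _ := exec.Command("kubectl", "--context", "old-k8s-version-762702",
			"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
		want := "fake.domain/registry.k8s.io/echoserver:1.4"
		if !strings.Contains(string(out), want) {
			fmt.Printf("addon did not load correct image; expected output to contain %q\n", want)
		}
	}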
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-762702
helpers_test.go:243: (dbg) docker inspect old-k8s-version-762702:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a",
	        "Created": "2025-11-24T04:12:22.608705618Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 469698,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:12:22.690736214Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/hosts",
	        "LogPath": "/var/lib/docker/containers/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a-json.log",
	        "Name": "/old-k8s-version-762702",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-762702:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-762702",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a",
	                "LowerDir": "/var/lib/docker/overlay2/653c33f0be4a366cb5cc86ca2501e9ef033df8c8abee4cc8bc2eca215ba11542-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/653c33f0be4a366cb5cc86ca2501e9ef033df8c8abee4cc8bc2eca215ba11542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/653c33f0be4a366cb5cc86ca2501e9ef033df8c8abee4cc8bc2eca215ba11542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/653c33f0be4a366cb5cc86ca2501e9ef033df8c8abee4cc8bc2eca215ba11542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-762702",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-762702/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-762702",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-762702",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-762702",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bba3adce1154b5edc7132457095bd48b9fe5e633470793d690ce0c0649904a9",
	            "SandboxKey": "/var/run/docker/netns/4bba3adce115",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-762702": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:b9:dc:52:7a:bf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2839db71c04bcd656cafcc00851680b9c7cc53726d05c9804df0e7524d958ffa",
	                    "EndpointID": "856c08a3a49b26df763abdbb4477551d24730721c596c6d724a7e11c071318a0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-762702",
	                        "b9dfaaddc60d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-762702 -n old-k8s-version-762702
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-762702 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-762702 logs -n 25: (1.1840159s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-778509 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo containerd config dump                                                                                                                                                                                                  │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo crio config                                                                                                                                                                                                             │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ delete  │ -p cilium-778509                                                                                                                                                                                                                              │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │ 24 Nov 25 04:10 UTC │
	│ start   │ -p force-systemd-env-400958 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-400958  │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │ 24 Nov 25 04:11 UTC │
	│ delete  │ -p kubernetes-upgrade-207884                                                                                                                                                                                                                  │ kubernetes-upgrade-207884 │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ start   │ -p cert-expiration-918798 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-918798    │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ delete  │ -p force-systemd-env-400958                                                                                                                                                                                                                   │ force-systemd-env-400958  │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ start   │ -p cert-options-967682 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:12 UTC │
	│ ssh     │ cert-options-967682 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ ssh     │ -p cert-options-967682 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ delete  │ -p cert-options-967682                                                                                                                                                                                                                        │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-762702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:12:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:12:16.489509  469310 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:12:16.490032  469310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:12:16.490066  469310 out.go:374] Setting ErrFile to fd 2...
	I1124 04:12:16.490087  469310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:12:16.490379  469310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:12:16.490918  469310 out.go:368] Setting JSON to false
	I1124 04:12:16.491849  469310 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10466,"bootTime":1763947071,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:12:16.491956  469310 start.go:143] virtualization:  
	I1124 04:12:16.495546  469310 out.go:179] * [old-k8s-version-762702] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:12:16.499845  469310 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:12:16.499948  469310 notify.go:221] Checking for updates...
	I1124 04:12:16.506039  469310 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:12:16.509130  469310 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:12:16.512218  469310 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:12:16.515350  469310 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:12:16.518644  469310 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:12:16.522166  469310 config.go:182] Loaded profile config "cert-expiration-918798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:12:16.522284  469310 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:12:16.547221  469310 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:12:16.547354  469310 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:12:16.604748  469310 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:12:16.595112196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:12:16.604855  469310 docker.go:319] overlay module found
	I1124 04:12:16.608243  469310 out.go:179] * Using the docker driver based on user configuration
	I1124 04:12:16.611269  469310 start.go:309] selected driver: docker
	I1124 04:12:16.611291  469310 start.go:927] validating driver "docker" against <nil>
	I1124 04:12:16.611309  469310 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:12:16.612060  469310 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:12:16.672010  469310 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:12:16.662626585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:12:16.672176  469310 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 04:12:16.672404  469310 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:12:16.675435  469310 out.go:179] * Using Docker driver with root privileges
	I1124 04:12:16.678446  469310 cni.go:84] Creating CNI manager for ""
	I1124 04:12:16.678548  469310 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:12:16.678559  469310 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 04:12:16.678657  469310 start.go:353] cluster config:
	{Name:old-k8s-version-762702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:12:16.681887  469310 out.go:179] * Starting "old-k8s-version-762702" primary control-plane node in "old-k8s-version-762702" cluster
	I1124 04:12:16.684711  469310 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:12:16.687634  469310 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:12:16.690556  469310 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 04:12:16.690611  469310 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1124 04:12:16.690621  469310 cache.go:65] Caching tarball of preloaded images
	I1124 04:12:16.690619  469310 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:12:16.690703  469310 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:12:16.690714  469310 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1124 04:12:16.690815  469310 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/config.json ...
	I1124 04:12:16.690839  469310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/config.json: {Name:mkbed4b41f7ff37df769b35727f2258d02521752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
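
The cluster config dumped above is what gets serialized to the profile's config.json here. A quick way to spot-check a field from the saved profile, assuming jq is available on the host (the path is copied verbatim from the log; the JSON field names are assumed to follow the Go struct fields shown in the dump):

	jq -r '.KubernetesConfig.KubernetesVersion' \
	  /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/config.json
	# expected: v1.28.0, per the config dump above
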
	I1124 04:12:16.715296  469310 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:12:16.715321  469310 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:12:16.715343  469310 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:12:16.715374  469310 start.go:360] acquireMachinesLock for old-k8s-version-762702: {Name:mk39e7bd6d63be24b0c5297d3d6b80f2dd18eb45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:12:16.715486  469310 start.go:364] duration metric: took 89.511µs to acquireMachinesLock for "old-k8s-version-762702"
	I1124 04:12:16.715517  469310 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-762702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:12:16.715602  469310 start.go:125] createHost starting for "" (driver="docker")
	I1124 04:12:16.719002  469310 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 04:12:16.719386  469310 start.go:159] libmachine.API.Create for "old-k8s-version-762702" (driver="docker")
	I1124 04:12:16.719431  469310 client.go:173] LocalClient.Create starting
	I1124 04:12:16.719535  469310 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem
	I1124 04:12:16.719591  469310 main.go:143] libmachine: Decoding PEM data...
	I1124 04:12:16.719614  469310 main.go:143] libmachine: Parsing certificate...
	I1124 04:12:16.719700  469310 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem
	I1124 04:12:16.719731  469310 main.go:143] libmachine: Decoding PEM data...
	I1124 04:12:16.719747  469310 main.go:143] libmachine: Parsing certificate...
	I1124 04:12:16.720157  469310 cli_runner.go:164] Run: docker network inspect old-k8s-version-762702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 04:12:16.736971  469310 cli_runner.go:211] docker network inspect old-k8s-version-762702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 04:12:16.737068  469310 network_create.go:284] running [docker network inspect old-k8s-version-762702] to gather additional debugging logs...
	I1124 04:12:16.737091  469310 cli_runner.go:164] Run: docker network inspect old-k8s-version-762702
	W1124 04:12:16.755593  469310 cli_runner.go:211] docker network inspect old-k8s-version-762702 returned with exit code 1
	I1124 04:12:16.755619  469310 network_create.go:287] error running [docker network inspect old-k8s-version-762702]: docker network inspect old-k8s-version-762702: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-762702 not found
	I1124 04:12:16.755644  469310 network_create.go:289] output of [docker network inspect old-k8s-version-762702]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-762702 not found
	
	** /stderr **
	I1124 04:12:16.755741  469310 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:12:16.772042  469310 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-740fb099fccc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:7a:9c:b0:4d:41} reservation:<nil>}
	I1124 04:12:16.772439  469310 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b0f25a7c590 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:53:b3:a1:55:1a} reservation:<nil>}
	I1124 04:12:16.772687  469310 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c1d995330d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:83:d9:0c:83:10} reservation:<nil>}
	I1124 04:12:16.772987  469310 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e7d131e8d19a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:7e:42:05:0e:3d} reservation:<nil>}
	I1124 04:12:16.773455  469310 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a7c80}
	I1124 04:12:16.773480  469310 network_create.go:124] attempt to create docker network old-k8s-version-762702 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 04:12:16.773541  469310 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-762702 old-k8s-version-762702
	I1124 04:12:16.846229  469310 network_create.go:108] docker network old-k8s-version-762702 192.168.85.0/24 created
	I1124 04:12:16.846266  469310 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-762702" container
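
The four "skipping subnet" lines above show minikube's free-subnet probe: it walks candidate private /24s (192.168.49/58/67/76) until it finds one with no bridge interface attached, then claims 192.168.85.0/24 and reserves .2 for the node container. A hedged way to reproduce the "which subnets are taken" view it works from, using only the stock docker CLI:

	# list every docker network together with its IPAM subnet(s)
	docker network ls -q | xargs -r -n1 docker network inspect \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
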
	I1124 04:12:16.846341  469310 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 04:12:16.864744  469310 cli_runner.go:164] Run: docker volume create old-k8s-version-762702 --label name.minikube.sigs.k8s.io=old-k8s-version-762702 --label created_by.minikube.sigs.k8s.io=true
	I1124 04:12:16.884538  469310 oci.go:103] Successfully created a docker volume old-k8s-version-762702
	I1124 04:12:16.884636  469310 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-762702-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-762702 --entrypoint /usr/bin/test -v old-k8s-version-762702:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 04:12:17.436479  469310 oci.go:107] Successfully prepared a docker volume old-k8s-version-762702
	I1124 04:12:17.436564  469310 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 04:12:17.436579  469310 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 04:12:17.436646  469310 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-762702:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 04:12:22.533492  469310 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-762702:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (5.096804561s)
	I1124 04:12:22.533526  469310 kic.go:203] duration metric: took 5.096944683s to extract preloaded images to volume ...
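
The two Run lines above are the whole preload trick: the lz4 tarball is bind-mounted read-only into a throwaway kicbase container alongside the named volume, and tar unpacks the cached container images straight into the volume that will later back /var in the node. A hedged, generic reduction of the same technique (the volume name and host tarball path are placeholders; the image reference is the one from the log):

	docker volume create demo-preload
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
	  -v demo-preload:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 \
	  -I lz4 -xf /preloaded.tar -C /extractDir
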
	W1124 04:12:22.533677  469310 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 04:12:22.533791  469310 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 04:12:22.593550  469310 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-762702 --name old-k8s-version-762702 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-762702 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-762702 --network old-k8s-version-762702 --ip 192.168.85.2 --volume old-k8s-version-762702:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 04:12:22.930692  469310 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Running}}
	I1124 04:12:22.956502  469310 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:12:22.989875  469310 cli_runner.go:164] Run: docker exec old-k8s-version-762702 stat /var/lib/dpkg/alternatives/iptables
	I1124 04:12:23.049109  469310 oci.go:144] the created container "old-k8s-version-762702" has a running status.
	I1124 04:12:23.049145  469310 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa...
	I1124 04:12:23.367804  469310 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 04:12:23.396995  469310 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:12:23.421856  469310 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 04:12:23.421882  469310 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-762702 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 04:12:23.496699  469310 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:12:23.522130  469310 machine.go:94] provisionDockerMachine start ...
	I1124 04:12:23.522219  469310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:12:23.545324  469310 main.go:143] libmachine: Using SSH client type: native
	I1124 04:12:23.545669  469310 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1124 04:12:23.545686  469310 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:12:23.546329  469310 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 04:12:26.698177  469310 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-762702
	
	I1124 04:12:26.698262  469310 ubuntu.go:182] provisioning hostname "old-k8s-version-762702"
	I1124 04:12:26.698357  469310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:12:26.718809  469310 main.go:143] libmachine: Using SSH client type: native
	I1124 04:12:26.719115  469310 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1124 04:12:26.719126  469310 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-762702 && echo "old-k8s-version-762702" | sudo tee /etc/hostname
	I1124 04:12:26.877414  469310 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-762702
	
	I1124 04:12:26.877627  469310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:12:26.898739  469310 main.go:143] libmachine: Using SSH client type: native
	I1124 04:12:26.899078  469310 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1124 04:12:26.899100  469310 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-762702' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-762702/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-762702' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 04:12:27.050626  469310 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 04:12:27.050655  469310 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:12:27.050676  469310 ubuntu.go:190] setting up certificates
	I1124 04:12:27.050687  469310 provision.go:84] configureAuth start
	I1124 04:12:27.050747  469310 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-762702
	I1124 04:12:27.067789  469310 provision.go:143] copyHostCerts
	I1124 04:12:27.067860  469310 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:12:27.067874  469310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:12:27.067963  469310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:12:27.068054  469310 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:12:27.068065  469310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:12:27.068092  469310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:12:27.068148  469310 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:12:27.068157  469310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:12:27.068184  469310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:12:27.068235  469310 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-762702 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-762702]
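
provision.go above generates a server certificate whose SAN set ([127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-762702]) covers every name the machine may be reached by. A hedged spot-check with stock openssl, using the server.pem path from the log:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
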
	I1124 04:12:27.332990  469310 provision.go:177] copyRemoteCerts
	I1124 04:12:27.333058  469310 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:12:27.333114  469310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:12:27.350778  469310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:12:27.454085  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:12:27.471454  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 04:12:27.489134  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 04:12:27.509240  469310 provision.go:87] duration metric: took 458.538997ms to configureAuth
	I1124 04:12:27.509279  469310 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:12:27.509493  469310 config.go:182] Loaded profile config "old-k8s-version-762702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 04:12:27.509606  469310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:12:27.526923  469310 main.go:143] libmachine: Using SSH client type: native
	I1124 04:12:27.527238  469310 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1124 04:12:27.527256  469310 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:12:27.826333  469310 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
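
The SSH command above drops an env-style file under /etc/sysconfig and restarts CRI-O; the tee output echoed back into the log confirms what landed in the file. A hedged verification pass from inside the node (reachable via `minikube ssh -p old-k8s-version-762702`):

	cat /etc/sysconfig/crio.minikube   # expect the CRIO_MINIKUBE_OPTIONS line above
	sudo systemctl is-active crio      # expect "active" after the restart
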
	
	I1124 04:12:27.826358  469310 machine.go:97] duration metric: took 4.304203587s to provisionDockerMachine
	I1124 04:12:27.826369  469310 client.go:176] duration metric: took 11.106928342s to LocalClient.Create
	I1124 04:12:27.826389  469310 start.go:167] duration metric: took 11.107005766s to libmachine.API.Create "old-k8s-version-762702"
	I1124 04:12:27.826398  469310 start.go:293] postStartSetup for "old-k8s-version-762702" (driver="docker")
	I1124 04:12:27.826423  469310 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:12:27.826526  469310 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:12:27.826572  469310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:12:27.844887  469310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:12:27.947078  469310 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:12:27.950700  469310 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:12:27.950725  469310 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:12:27.950737  469310 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:12:27.950792  469310 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:12:27.950873  469310 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:12:27.950970  469310 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:12:27.959664  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:12:27.983004  469310 start.go:296] duration metric: took 156.591001ms for postStartSetup
	I1124 04:12:27.983380  469310 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-762702
	I1124 04:12:28.005830  469310 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/config.json ...
	I1124 04:12:28.006135  469310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:12:28.006197  469310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:12:28.025450  469310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:12:28.140465  469310 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:12:28.145361  469310 start.go:128] duration metric: took 11.429744153s to createHost
	I1124 04:12:28.145440  469310 start.go:83] releasing machines lock for "old-k8s-version-762702", held for 11.429938569s
	I1124 04:12:28.145541  469310 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-762702
	I1124 04:12:28.162822  469310 ssh_runner.go:195] Run: cat /version.json
	I1124 04:12:28.162870  469310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:12:28.163119  469310 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:12:28.163169  469310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:12:28.189194  469310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:12:28.195379  469310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:12:28.290186  469310 ssh_runner.go:195] Run: systemctl --version
	I1124 04:12:28.386815  469310 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:12:28.423207  469310 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:12:28.428123  469310 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:12:28.428201  469310 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:12:28.456285  469310 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
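
cni.go:262 above disables competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, which keeps the step reversible. The find invocation from the log, re-quoted so it survives an interactive shell (same behavior; GNU find assumed):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
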
	I1124 04:12:28.456360  469310 start.go:496] detecting cgroup driver to use...
	I1124 04:12:28.456408  469310 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:12:28.456489  469310 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:12:28.476449  469310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:12:28.489005  469310 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:12:28.489078  469310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:12:28.507653  469310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:12:28.527194  469310 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:12:28.656733  469310 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:12:28.793188  469310 docker.go:234] disabling docker service ...
	I1124 04:12:28.793309  469310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:12:28.814812  469310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:12:28.828969  469310 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:12:28.948478  469310 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:12:29.077649  469310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:12:29.091630  469310 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:12:29.107557  469310 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1124 04:12:29.107702  469310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:12:29.117371  469310 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:12:29.117442  469310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:12:29.127088  469310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:12:29.136329  469310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:12:29.145253  469310 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:12:29.153780  469310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:12:29.163074  469310 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:12:29.181942  469310 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:12:29.192830  469310 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:12:29.200930  469310 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:12:29.208524  469310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:12:29.332916  469310 ssh_runner.go:195] Run: sudo systemctl restart crio
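
The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place before this restart. A hedged reconstruction of the keys it leaves behind (the seds match keys wherever they sit, so the exact TOML table each key ends up under is not asserted here):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
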
	I1124 04:12:29.511353  469310 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:12:29.511433  469310 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:12:29.515801  469310 start.go:564] Will wait 60s for crictl version
	I1124 04:12:29.515869  469310 ssh_runner.go:195] Run: which crictl
	I1124 04:12:29.519705  469310 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:12:29.544954  469310 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:12:29.545055  469310 ssh_runner.go:195] Run: crio --version
	I1124 04:12:29.572904  469310 ssh_runner.go:195] Run: crio --version
	I1124 04:12:29.606749  469310 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1124 04:12:29.609572  469310 cli_runner.go:164] Run: docker network inspect old-k8s-version-762702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:12:29.626339  469310 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 04:12:29.630244  469310 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
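
The one-liner above is minikube's idempotent hosts-entry pattern: filter out any stale host.minikube.internal line, append the current gateway mapping, and copy the temp file back over /etc/hosts (cp rather than mv, plausibly because docker bind-mounts /etc/hosts into the container and a rename across the mount would fail). The same command, reflowed for readability (bash required for the $'\t' quoting):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.85.1\thost.minikube.internal\n'
	} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
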
	I1124 04:12:29.640067  469310 kubeadm.go:884] updating cluster {Name:old-k8s-version-762702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:12:29.640196  469310 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 04:12:29.640256  469310 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:12:29.672489  469310 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:12:29.672515  469310 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:12:29.672569  469310 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:12:29.707672  469310 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:12:29.707698  469310 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:12:29.707707  469310 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1124 04:12:29.708145  469310 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-762702 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
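
In the kubelet unit rendered above, the bare `ExecStart=` line before the real one is the standard systemd idiom for clearing any ExecStart inherited from the base unit, so the drop-in's command is the only one that runs. Once the unit and the 10-kubeadm.conf drop-in are installed (see the scp lines below), the merged result can be inspected on the node with:

	systemctl cat kubelet
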
	I1124 04:12:29.708258  469310 ssh_runner.go:195] Run: crio config
	I1124 04:12:29.767232  469310 cni.go:84] Creating CNI manager for ""
	I1124 04:12:29.767257  469310 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:12:29.767278  469310 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:12:29.767304  469310 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-762702 NodeName:old-k8s-version-762702 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:12:29.767480  469310 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-762702"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 04:12:29.767556  469310 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 04:12:29.775360  469310 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:12:29.775441  469310 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:12:29.783139  469310 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1124 04:12:29.795837  469310 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:12:29.809079  469310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
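
The kubeadm manifest printed earlier is staged on the node as /var/tmp/minikube/kubeadm.yaml.new here. A hedged sketch of the hand-run equivalent of what follows, assuming kubeadm sits next to kubelet in the versioned binaries directory shown above:

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml
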
	I1124 04:12:29.821940  469310 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:12:29.825529  469310 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:12:29.835197  469310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:12:29.949772  469310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:12:29.966768  469310 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702 for IP: 192.168.85.2
	I1124 04:12:29.966789  469310 certs.go:195] generating shared ca certs ...
	I1124 04:12:29.966807  469310 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:12:29.966963  469310 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:12:29.967019  469310 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:12:29.967030  469310 certs.go:257] generating profile certs ...
	I1124 04:12:29.967086  469310 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.key
	I1124 04:12:29.967102  469310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt with IP's: []
	I1124 04:12:30.205449  469310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt ...
	I1124 04:12:30.205485  469310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: {Name:mk3375e634cbe498d44fb3d9af81e5ce6e740050 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:12:30.205711  469310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.key ...
	I1124 04:12:30.205728  469310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.key: {Name:mk4d9c434598eae8629930b8f7bfa4b6bb6fbd99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:12:30.205826  469310 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.key.8fa10a20
	I1124 04:12:30.205847  469310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.crt.8fa10a20 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 04:12:30.655610  469310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.crt.8fa10a20 ...
	I1124 04:12:30.655643  469310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.crt.8fa10a20: {Name:mkc6eae69dcf55458c91ce466240c96e14eb5102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:12:30.655832  469310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.key.8fa10a20 ...
	I1124 04:12:30.655847  469310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.key.8fa10a20: {Name:mkee58fd7ebc8a05ad960cbf2a9b504520721741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:12:30.655932  469310 certs.go:382] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.crt.8fa10a20 -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.crt
	I1124 04:12:30.656011  469310 certs.go:386] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.key.8fa10a20 -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.key
	I1124 04:12:30.656073  469310 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.key
	I1124 04:12:30.656090  469310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.crt with IP's: []
	I1124 04:12:30.958650  469310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.crt ...
	I1124 04:12:30.958683  469310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.crt: {Name:mkd2c2bee6e3c4b706298e448d7f51caec28d486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:12:30.958868  469310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.key ...
	I1124 04:12:30.958879  469310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.key: {Name:mk5051c6363fa508a4cf0fa4ef9f0e5d316a3ecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:12:30.959054  469310 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:12:30.959093  469310 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:12:30.959103  469310 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:12:30.959132  469310 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:12:30.959157  469310 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:12:30.959181  469310 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:12:30.959225  469310 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:12:30.959865  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:12:30.983080  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:12:31.004673  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:12:31.025369  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:12:31.047480  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 04:12:31.066527  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 04:12:31.085137  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:12:31.104651  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 04:12:31.124379  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:12:31.145735  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:12:31.165802  469310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:12:31.184210  469310 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:12:31.197938  469310 ssh_runner.go:195] Run: openssl version
	I1124 04:12:31.204210  469310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:12:31.213147  469310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:12:31.216830  469310 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:12:31.216942  469310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:12:31.257996  469310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 04:12:31.267700  469310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:12:31.276086  469310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:12:31.279866  469310 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:12:31.279955  469310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:12:31.320894  469310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:12:31.329012  469310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:12:31.337036  469310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:12:31.341043  469310 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:12:31.341127  469310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:12:31.382240  469310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
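Note: the *.0 symlink names created above are OpenSSL subject hashes, which is how OpenSSL locates a CA in /etc/ssl/certs at verification time. A minimal way to reproduce the mapping by hand, assuming the same certificate paths as in this run:

    # print the subject hash OpenSSL uses for the lookup symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, matching the /etc/ssl/certs/b5213941.0 link created above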
	I1124 04:12:31.390544  469310 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:12:31.393862  469310 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 04:12:31.393928  469310 kubeadm.go:401] StartCluster: {Name:old-k8s-version-762702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:12:31.394006  469310 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:12:31.394067  469310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:12:31.424583  469310 cri.go:89] found id: ""
	I1124 04:12:31.424663  469310 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:12:31.432605  469310 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 04:12:31.440599  469310 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 04:12:31.440716  469310 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 04:12:31.448815  469310 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 04:12:31.448887  469310 kubeadm.go:158] found existing configuration files:
	
	I1124 04:12:31.448958  469310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 04:12:31.456867  469310 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 04:12:31.456948  469310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 04:12:31.464696  469310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 04:12:31.472347  469310 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 04:12:31.472418  469310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 04:12:31.480108  469310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 04:12:31.488065  469310 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 04:12:31.488175  469310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 04:12:31.495546  469310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 04:12:31.503593  469310 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 04:12:31.503713  469310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 04:12:31.511167  469310 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 04:12:31.572746  469310 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 04:12:31.573173  469310 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 04:12:31.631295  469310 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 04:12:31.631452  469310 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 04:12:31.631537  469310 kubeadm.go:319] OS: Linux
	I1124 04:12:31.631617  469310 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 04:12:31.631728  469310 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 04:12:31.631803  469310 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 04:12:31.631881  469310 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 04:12:31.631964  469310 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 04:12:31.632047  469310 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 04:12:31.632133  469310 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 04:12:31.632241  469310 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 04:12:31.632333  469310 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 04:12:31.736916  469310 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 04:12:31.737075  469310 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 04:12:31.737171  469310 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 04:12:31.886533  469310 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 04:12:31.889817  469310 out.go:252]   - Generating certificates and keys ...
	I1124 04:12:31.889921  469310 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 04:12:31.890001  469310 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 04:12:32.691146  469310 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 04:12:33.376637  469310 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 04:12:33.981361  469310 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 04:12:34.392141  469310 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 04:12:34.893610  469310 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 04:12:34.893969  469310 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-762702] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 04:12:35.365158  469310 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 04:12:35.365551  469310 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-762702] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 04:12:35.645191  469310 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 04:12:36.398861  469310 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 04:12:37.021941  469310 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 04:12:37.022233  469310 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 04:12:37.244592  469310 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 04:12:37.463492  469310 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 04:12:37.873833  469310 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 04:12:38.035548  469310 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 04:12:38.036280  469310 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 04:12:38.039264  469310 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 04:12:38.042800  469310 out.go:252]   - Booting up control plane ...
	I1124 04:12:38.042915  469310 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 04:12:38.042999  469310 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 04:12:38.043072  469310 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 04:12:38.063635  469310 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 04:12:38.066430  469310 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 04:12:38.066547  469310 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 04:12:38.202521  469310 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 04:12:46.206622  469310 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.004431 seconds
	I1124 04:12:46.206763  469310 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 04:12:46.234934  469310 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 04:12:46.767087  469310 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 04:12:46.767352  469310 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-762702 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 04:12:47.280005  469310 kubeadm.go:319] [bootstrap-token] Using token: 7wb6am.037jgjbv9dujih75
	I1124 04:12:47.282962  469310 out.go:252]   - Configuring RBAC rules ...
	I1124 04:12:47.283109  469310 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 04:12:47.288321  469310 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 04:12:47.297791  469310 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 04:12:47.301869  469310 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 04:12:47.306594  469310 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 04:12:47.313129  469310 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 04:12:47.328141  469310 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 04:12:47.634615  469310 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 04:12:47.724456  469310 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 04:12:47.726014  469310 kubeadm.go:319] 
	I1124 04:12:47.726090  469310 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 04:12:47.726103  469310 kubeadm.go:319] 
	I1124 04:12:47.726177  469310 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 04:12:47.726187  469310 kubeadm.go:319] 
	I1124 04:12:47.726211  469310 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 04:12:47.726689  469310 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 04:12:47.726748  469310 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 04:12:47.726765  469310 kubeadm.go:319] 
	I1124 04:12:47.726818  469310 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 04:12:47.726827  469310 kubeadm.go:319] 
	I1124 04:12:47.726872  469310 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 04:12:47.726879  469310 kubeadm.go:319] 
	I1124 04:12:47.726928  469310 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 04:12:47.727003  469310 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 04:12:47.727071  469310 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 04:12:47.727078  469310 kubeadm.go:319] 
	I1124 04:12:47.727352  469310 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 04:12:47.727438  469310 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 04:12:47.727447  469310 kubeadm.go:319] 
	I1124 04:12:47.727701  469310 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7wb6am.037jgjbv9dujih75 \
	I1124 04:12:47.727808  469310 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 \
	I1124 04:12:47.727992  469310 kubeadm.go:319] 	--control-plane 
	I1124 04:12:47.728009  469310 kubeadm.go:319] 
	I1124 04:12:47.728252  469310 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 04:12:47.728263  469310 kubeadm.go:319] 
	I1124 04:12:47.728521  469310 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7wb6am.037jgjbv9dujih75 \
	I1124 04:12:47.728828  469310 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 
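Note: the --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA's public key. If the init output is lost, the same value can be recomputed on the control plane with the recipe from the kubeadm docs (a sketch; assumes the default CA location /etc/kubernetes/pki/ca.crt and an RSA CA key):

    # hash of the CA public key, in the sha256:<hex> form kubeadm expects
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | sha256sum | cut -d' ' -f1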
	I1124 04:12:47.736626  469310 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 04:12:47.736759  469310 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 04:12:47.736783  469310 cni.go:84] Creating CNI manager for ""
	I1124 04:12:47.736792  469310 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:12:47.742091  469310 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 04:12:47.745122  469310 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 04:12:47.749276  469310 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 04:12:47.749300  469310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 04:12:47.769215  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 04:12:48.792810  469310 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.023554028s)
	I1124 04:12:48.792847  469310 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 04:12:48.792965  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:48.793031  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-762702 minikube.k8s.io/updated_at=2025_11_24T04_12_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=old-k8s-version-762702 minikube.k8s.io/primary=true
	I1124 04:12:48.985710  469310 ops.go:34] apiserver oom_adj: -16
	I1124 04:12:48.985813  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:49.485947  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:49.986190  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:50.485942  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:50.986525  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:51.486394  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:51.986397  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:52.486755  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:52.985950  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:53.485921  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:53.986575  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:54.485959  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:54.986710  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:55.486544  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:55.986000  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:56.486613  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:56.985973  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:57.486538  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:57.986774  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:58.485906  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:58.986109  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:59.486527  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:12:59.986748  469310 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:13:00.458284  469310 kubeadm.go:1114] duration metric: took 11.665366515s to wait for elevateKubeSystemPrivileges
	I1124 04:13:00.458324  469310 kubeadm.go:403] duration metric: took 29.064400853s to StartCluster
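Note: the burst of `kubectl get sa default` calls above is minikube polling for the default ServiceAccount before elevating kube-system privileges; the timestamps show a roughly 500ms retry cadence. A shell equivalent of that wait loop (a sketch, reusing the kubeconfig path from this run):

    # block until the default ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done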
	I1124 04:13:00.458343  469310 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:13:00.458416  469310 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:13:00.459532  469310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:13:00.459792  469310 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:13:00.459930  469310 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 04:13:00.460213  469310 config.go:182] Loaded profile config "old-k8s-version-762702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 04:13:00.460260  469310 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:13:00.460335  469310 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-762702"
	I1124 04:13:00.460353  469310 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-762702"
	I1124 04:13:00.460383  469310 host.go:66] Checking if "old-k8s-version-762702" exists ...
	I1124 04:13:00.461287  469310 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:00.461455  469310 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-762702"
	I1124 04:13:00.461479  469310 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-762702"
	I1124 04:13:00.461748  469310 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:00.464142  469310 out.go:179] * Verifying Kubernetes components...
	I1124 04:13:00.470478  469310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:13:00.505721  469310 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-762702"
	I1124 04:13:00.510325  469310 host.go:66] Checking if "old-k8s-version-762702" exists ...
	I1124 04:13:00.510914  469310 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:00.514098  469310 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:13:00.517087  469310 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:13:00.517115  469310 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:13:00.517192  469310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:00.541767  469310 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:13:00.541793  469310 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:13:00.541864  469310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:00.569836  469310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:00.580256  469310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:00.842342  469310 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:13:00.852408  469310 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 04:13:00.852584  469310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:13:00.967907  469310 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:13:01.813884  469310 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-762702" to be "Ready" ...
	I1124 04:13:01.814191  469310 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 04:13:02.194404  469310 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.226406507s)
	I1124 04:13:02.197915  469310 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 04:13:02.200911  469310 addons.go:530] duration metric: took 1.740639885s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 04:13:02.319645  469310 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-762702" context rescaled to 1 replicas
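Note: the sed pipeline at 04:13:00 rewrites the coredns ConfigMap so that cluster DNS resolves host.minikube.internal to the host gateway. Reconstructed from the sed expressions above (the resulting Corefile is not captured verbatim in this log), the replace inserts a hosts block ahead of the forward plugin, plus `log` before `errors`:

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }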
	W1124 04:13:03.817777  469310 node_ready.go:57] node "old-k8s-version-762702" has "Ready":"False" status (will retry)
	W1124 04:13:05.824105  469310 node_ready.go:57] node "old-k8s-version-762702" has "Ready":"False" status (will retry)
	W1124 04:13:08.317458  469310 node_ready.go:57] node "old-k8s-version-762702" has "Ready":"False" status (will retry)
	W1124 04:13:10.317837  469310 node_ready.go:57] node "old-k8s-version-762702" has "Ready":"False" status (will retry)
	W1124 04:13:12.817283  469310 node_ready.go:57] node "old-k8s-version-762702" has "Ready":"False" status (will retry)
	I1124 04:13:14.817356  469310 node_ready.go:49] node "old-k8s-version-762702" is "Ready"
	I1124 04:13:14.817383  469310 node_ready.go:38] duration metric: took 13.003425922s for node "old-k8s-version-762702" to be "Ready" ...
	I1124 04:13:14.817396  469310 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:13:14.817449  469310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:13:14.838968  469310 api_server.go:72] duration metric: took 14.379136943s to wait for apiserver process to appear ...
	I1124 04:13:14.838991  469310 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:13:14.839008  469310 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 04:13:14.853546  469310 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 04:13:14.856689  469310 api_server.go:141] control plane version: v1.28.0
	I1124 04:13:14.856715  469310 api_server.go:131] duration metric: took 17.717972ms to wait for apiserver health ...
	I1124 04:13:14.856724  469310 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:13:14.882393  469310 system_pods.go:59] 8 kube-system pods found
	I1124 04:13:14.882431  469310 system_pods.go:61] "coredns-5dd5756b68-c5hgr" [7d0b287f-b2e8-461f-abf4-71700b66caf8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:13:14.882438  469310 system_pods.go:61] "etcd-old-k8s-version-762702" [62d0f56d-8e43-47b5-baf7-2af95f42cd81] Running
	I1124 04:13:14.882444  469310 system_pods.go:61] "kindnet-lkhzw" [db06bd2a-7e8a-49e3-a17f-62b681f600d1] Running
	I1124 04:13:14.882448  469310 system_pods.go:61] "kube-apiserver-old-k8s-version-762702" [efc26447-b9f1-4aa7-a2b8-e2ef56674415] Running
	I1124 04:13:14.882543  469310 system_pods.go:61] "kube-controller-manager-old-k8s-version-762702" [9817fe2f-c899-4ef9-8e2f-c0b22566b389] Running
	I1124 04:13:14.882549  469310 system_pods.go:61] "kube-proxy-7ml4n" [1ed410af-141e-4197-9a5c-6900dc8e35e6] Running
	I1124 04:13:14.882552  469310 system_pods.go:61] "kube-scheduler-old-k8s-version-762702" [e1a7d08c-4e60-4f84-a997-3baef7354877] Running
	I1124 04:13:14.882558  469310 system_pods.go:61] "storage-provisioner" [8af39921-2789-4cc5-974a-89f0667a6e47] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:13:14.882564  469310 system_pods.go:74] duration metric: took 25.835303ms to wait for pod list to return data ...
	I1124 04:13:14.882573  469310 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:13:14.892813  469310 default_sa.go:45] found service account: "default"
	I1124 04:13:14.892888  469310 default_sa.go:55] duration metric: took 10.307852ms for default service account to be created ...
	I1124 04:13:14.892913  469310 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 04:13:14.900101  469310 system_pods.go:86] 8 kube-system pods found
	I1124 04:13:14.900136  469310 system_pods.go:89] "coredns-5dd5756b68-c5hgr" [7d0b287f-b2e8-461f-abf4-71700b66caf8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:13:14.900142  469310 system_pods.go:89] "etcd-old-k8s-version-762702" [62d0f56d-8e43-47b5-baf7-2af95f42cd81] Running
	I1124 04:13:14.900148  469310 system_pods.go:89] "kindnet-lkhzw" [db06bd2a-7e8a-49e3-a17f-62b681f600d1] Running
	I1124 04:13:14.900155  469310 system_pods.go:89] "kube-apiserver-old-k8s-version-762702" [efc26447-b9f1-4aa7-a2b8-e2ef56674415] Running
	I1124 04:13:14.900159  469310 system_pods.go:89] "kube-controller-manager-old-k8s-version-762702" [9817fe2f-c899-4ef9-8e2f-c0b22566b389] Running
	I1124 04:13:14.900163  469310 system_pods.go:89] "kube-proxy-7ml4n" [1ed410af-141e-4197-9a5c-6900dc8e35e6] Running
	I1124 04:13:14.900170  469310 system_pods.go:89] "kube-scheduler-old-k8s-version-762702" [e1a7d08c-4e60-4f84-a997-3baef7354877] Running
	I1124 04:13:14.900177  469310 system_pods.go:89] "storage-provisioner" [8af39921-2789-4cc5-974a-89f0667a6e47] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:13:14.900195  469310 retry.go:31] will retry after 238.278078ms: missing components: kube-dns
	I1124 04:13:15.142230  469310 system_pods.go:86] 8 kube-system pods found
	I1124 04:13:15.142270  469310 system_pods.go:89] "coredns-5dd5756b68-c5hgr" [7d0b287f-b2e8-461f-abf4-71700b66caf8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:13:15.142278  469310 system_pods.go:89] "etcd-old-k8s-version-762702" [62d0f56d-8e43-47b5-baf7-2af95f42cd81] Running
	I1124 04:13:15.142283  469310 system_pods.go:89] "kindnet-lkhzw" [db06bd2a-7e8a-49e3-a17f-62b681f600d1] Running
	I1124 04:13:15.142288  469310 system_pods.go:89] "kube-apiserver-old-k8s-version-762702" [efc26447-b9f1-4aa7-a2b8-e2ef56674415] Running
	I1124 04:13:15.142293  469310 system_pods.go:89] "kube-controller-manager-old-k8s-version-762702" [9817fe2f-c899-4ef9-8e2f-c0b22566b389] Running
	I1124 04:13:15.142297  469310 system_pods.go:89] "kube-proxy-7ml4n" [1ed410af-141e-4197-9a5c-6900dc8e35e6] Running
	I1124 04:13:15.142302  469310 system_pods.go:89] "kube-scheduler-old-k8s-version-762702" [e1a7d08c-4e60-4f84-a997-3baef7354877] Running
	I1124 04:13:15.142305  469310 system_pods.go:89] "storage-provisioner" [8af39921-2789-4cc5-974a-89f0667a6e47] Running
	I1124 04:13:15.142313  469310 system_pods.go:126] duration metric: took 249.381739ms to wait for k8s-apps to be running ...
	I1124 04:13:15.142325  469310 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 04:13:15.142389  469310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:13:15.158623  469310 system_svc.go:56] duration metric: took 16.286806ms WaitForService to wait for kubelet
	I1124 04:13:15.158707  469310 kubeadm.go:587] duration metric: took 14.698880148s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:13:15.158743  469310 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:13:15.161613  469310 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:13:15.161647  469310 node_conditions.go:123] node cpu capacity is 2
	I1124 04:13:15.161662  469310 node_conditions.go:105] duration metric: took 2.90633ms to run NodePressure ...
	I1124 04:13:15.161677  469310 start.go:242] waiting for startup goroutines ...
	I1124 04:13:15.161684  469310 start.go:247] waiting for cluster config update ...
	I1124 04:13:15.161696  469310 start.go:256] writing updated cluster config ...
	I1124 04:13:15.161989  469310 ssh_runner.go:195] Run: rm -f paused
	I1124 04:13:15.165940  469310 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:13:15.183108  469310 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-c5hgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:13:16.190406  469310 pod_ready.go:94] pod "coredns-5dd5756b68-c5hgr" is "Ready"
	I1124 04:13:16.190430  469310 pod_ready.go:86] duration metric: took 1.007295596s for pod "coredns-5dd5756b68-c5hgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:13:16.193718  469310 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:13:16.199333  469310 pod_ready.go:94] pod "etcd-old-k8s-version-762702" is "Ready"
	I1124 04:13:16.199383  469310 pod_ready.go:86] duration metric: took 5.635369ms for pod "etcd-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:13:16.202872  469310 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:13:16.207835  469310 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-762702" is "Ready"
	I1124 04:13:16.207866  469310 pod_ready.go:86] duration metric: took 4.965311ms for pod "kube-apiserver-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:13:16.211084  469310 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:13:16.387255  469310 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-762702" is "Ready"
	I1124 04:13:16.387293  469310 pod_ready.go:86] duration metric: took 176.182037ms for pod "kube-controller-manager-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:13:16.587829  469310 pod_ready.go:83] waiting for pod "kube-proxy-7ml4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:13:16.987154  469310 pod_ready.go:94] pod "kube-proxy-7ml4n" is "Ready"
	I1124 04:13:16.987180  469310 pod_ready.go:86] duration metric: took 399.321188ms for pod "kube-proxy-7ml4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:13:17.186986  469310 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:13:17.586334  469310 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-762702" is "Ready"
	I1124 04:13:17.586373  469310 pod_ready.go:86] duration metric: took 399.315007ms for pod "kube-scheduler-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:13:17.586388  469310 pod_ready.go:40] duration metric: took 2.42039603s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:13:17.642766  469310 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1124 04:13:17.645847  469310 out.go:203] 
	W1124 04:13:17.648813  469310 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 04:13:17.651748  469310 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 04:13:17.655619  469310 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-762702" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 04:13:14 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:14.918763427Z" level=info msg="Created container 6bbd9b90a3fb0989d1bb9b5b3eb9b876d89595765abe0a1314e0113edf1a6bc8: kube-system/coredns-5dd5756b68-c5hgr/coredns" id=4f440aec-3766-4efe-bd12-6a536514f281 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:13:14 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:14.922230626Z" level=info msg="Starting container: 6bbd9b90a3fb0989d1bb9b5b3eb9b876d89595765abe0a1314e0113edf1a6bc8" id=9581138f-97d3-44a1-b1e2-973fdd47cbdc name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:13:14 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:14.926137642Z" level=info msg="Started container" PID=1934 containerID=6bbd9b90a3fb0989d1bb9b5b3eb9b876d89595765abe0a1314e0113edf1a6bc8 description=kube-system/coredns-5dd5756b68-c5hgr/coredns id=9581138f-97d3-44a1-b1e2-973fdd47cbdc name=/runtime.v1.RuntimeService/StartContainer sandboxID=905335a3019f28f3bbc87bdf7dfafdf212cf95a3ab26bdb0dce6fda28d6413fe
	Nov 24 04:13:18 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:18.17156059Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d736b349-df0a-4f30-a257-e17ad6567fe5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:13:18 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:18.171639918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:13:18 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:18.17860273Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3becf06db1e84ff45c2e1419c0f0650c59b644fba2063935f3cf18a44638fd85 UID:9b6392ee-0350-4790-80de-baef7e6db4f3 NetNS:/var/run/netns/a0069963-b12a-4c08-9634-535cbb626b52 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d960}] Aliases:map[]}"
	Nov 24 04:13:18 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:18.178653102Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 04:13:18 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:18.195176317Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3becf06db1e84ff45c2e1419c0f0650c59b644fba2063935f3cf18a44638fd85 UID:9b6392ee-0350-4790-80de-baef7e6db4f3 NetNS:/var/run/netns/a0069963-b12a-4c08-9634-535cbb626b52 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d960}] Aliases:map[]}"
	Nov 24 04:13:18 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:18.195338141Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 04:13:18 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:18.200186905Z" level=info msg="Ran pod sandbox 3becf06db1e84ff45c2e1419c0f0650c59b644fba2063935f3cf18a44638fd85 with infra container: default/busybox/POD" id=d736b349-df0a-4f30-a257-e17ad6567fe5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:13:18 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:18.201247581Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2b4fee8a-31c3-4384-8941-56bbc446bae5 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:13:18 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:18.201382639Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2b4fee8a-31c3-4384-8941-56bbc446bae5 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:13:18 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:18.201425552Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2b4fee8a-31c3-4384-8941-56bbc446bae5 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:13:18 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:18.202229379Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=242fe315-13cf-4eb8-957a-7fa5a80f034f name=/runtime.v1.ImageService/PullImage
	Nov 24 04:13:18 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:18.205406071Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 04:13:20 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:20.264779406Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=242fe315-13cf-4eb8-957a-7fa5a80f034f name=/runtime.v1.ImageService/PullImage
	Nov 24 04:13:20 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:20.268041391Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8a059695-7373-4ceb-8dee-8d3d3b30b484 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:13:20 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:20.27026164Z" level=info msg="Creating container: default/busybox/busybox" id=42486ece-8d86-4fd2-a964-9d9215bfc5e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:13:20 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:20.270495678Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:13:20 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:20.275796378Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:13:20 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:20.27633008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:13:20 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:20.293510592Z" level=info msg="Created container 505591175e38f62e4d82710054ed3ae2ad2d56ac935020f3138349521156bdc3: default/busybox/busybox" id=42486ece-8d86-4fd2-a964-9d9215bfc5e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:13:20 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:20.294449091Z" level=info msg="Starting container: 505591175e38f62e4d82710054ed3ae2ad2d56ac935020f3138349521156bdc3" id=d173bc30-93ef-4547-a5dd-3a5e97cb202a name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:13:20 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:20.297272802Z" level=info msg="Started container" PID=1998 containerID=505591175e38f62e4d82710054ed3ae2ad2d56ac935020f3138349521156bdc3 description=default/busybox/busybox id=d173bc30-93ef-4547-a5dd-3a5e97cb202a name=/runtime.v1.RuntimeService/StartContainer sandboxID=3becf06db1e84ff45c2e1419c0f0650c59b644fba2063935f3cf18a44638fd85
	Nov 24 04:13:28 old-k8s-version-762702 crio[840]: time="2025-11-24T04:13:28.064339757Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
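Note: the busybox pull recorded above can be cross-checked from the node with crictl (a sketch; run inside the minikube container, e.g. via `minikube -p old-k8s-version-762702 ssh`):

    # list the pulled image with its digest; the repository argument filters the output
    sudo crictl images --digests gcr.io/k8s-minikube/busybox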
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	505591175e38f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   3becf06db1e84       busybox                                          default
	6bbd9b90a3fb0       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      14 seconds ago      Running             coredns                   0                   905335a3019f2       coredns-5dd5756b68-c5hgr                         kube-system
	441d5ed99c345       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago      Running             storage-provisioner       0                   9e157fef03306       storage-provisioner                              kube-system
	6bb70a16abe12       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   a95506dbd13a3       kindnet-lkhzw                                    kube-system
	cbacfe3637389       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      28 seconds ago      Running             kube-proxy                0                   e442588be61fb       kube-proxy-7ml4n                                 kube-system
	e92eb72ed71de       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      49 seconds ago      Running             kube-apiserver            0                   58e3b0194d603       kube-apiserver-old-k8s-version-762702            kube-system
	9a661ba4ef371       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      49 seconds ago      Running             kube-scheduler            0                   b9b1894e0caa9       kube-scheduler-old-k8s-version-762702            kube-system
	d5d3fc49c5cea       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      49 seconds ago      Running             kube-controller-manager   0                   18778771b573c       kube-controller-manager-old-k8s-version-762702   kube-system
	8c6341646cb85       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      49 seconds ago      Running             etcd                      0                   55fa6badb9f5b       etcd-old-k8s-version-762702                      kube-system
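Note: the table above is the node-side CRI view of the workload. The same listing can be reproduced with crictl (assuming the configured default CRI-O socket):

    sudo crictl ps -a    # all containers, any state, as in the section above
    sudo crictl pods     # the owning pod sandboxes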
	
	
	==> coredns [6bbd9b90a3fb0989d1bb9b5b3eb9b876d89595765abe0a1314e0113edf1a6bc8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36252 - 8512 "HINFO IN 3221211337082613111.1199075658669265939. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022284764s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-762702
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-762702
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=old-k8s-version-762702
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_12_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:12:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-762702
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:13:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:13:18 +0000   Mon, 24 Nov 2025 04:12:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:13:18 +0000   Mon, 24 Nov 2025 04:12:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:13:18 +0000   Mon, 24 Nov 2025 04:12:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:13:18 +0000   Mon, 24 Nov 2025 04:13:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-762702
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                1df4042d-4e31-477e-85db-12513191744f
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-c5hgr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-762702                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-lkhzw                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-762702             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-762702    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-7ml4n                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-762702             100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node old-k8s-version-762702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-762702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-762702 event: Registered Node old-k8s-version-762702 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-762702 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 03:45] overlayfs: idmapped layers are currently not supported
	[Nov24 03:46] overlayfs: idmapped layers are currently not supported
	[Nov24 03:51] overlayfs: idmapped layers are currently not supported
	[ +32.185990] overlayfs: idmapped layers are currently not supported
	[Nov24 03:52] overlayfs: idmapped layers are currently not supported
	[Nov24 03:54] overlayfs: idmapped layers are currently not supported
	[Nov24 03:55] overlayfs: idmapped layers are currently not supported
	[Nov24 03:56] overlayfs: idmapped layers are currently not supported
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8c6341646cb85432dd8071a5cfd7907d7d03199af9ba6a5072d88829fbeb8e16] <==
	{"level":"info","ts":"2025-11-24T04:12:39.870914Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T04:12:39.8711Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T04:12:39.871153Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T04:12:39.87264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-24T04:12:39.872797Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T04:12:39.872889Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-24T04:12:39.87285Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T04:12:40.226892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-24T04:12:40.227018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-24T04:12:40.227079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-24T04:12:40.227135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-24T04:12:40.227166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-24T04:12:40.227209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-24T04:12:40.227245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-24T04:12:40.228915Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-762702 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T04:12:40.228949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T04:12:40.229846Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T04:12:40.229919Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T04:12:40.230399Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T04:12:40.270318Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T04:12:40.23102Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T04:12:40.273589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T04:12:40.273654Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T04:12:40.273708Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T04:12:40.279275Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 04:13:29 up  2:55,  0 user,  load average: 2.27, 3.28, 2.69
	Linux old-k8s-version-762702 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6bb70a16abe12bf4cbd6d36d5e773d596413f4f4af80fb6eb5778f346ce970d6] <==
	I1124 04:13:03.722768       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:13:03.722991       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 04:13:03.723118       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:13:03.723138       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:13:03.723152       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:13:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:13:04.015563       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:13:04.017770       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:13:04.017889       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:13:04.018052       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 04:13:04.218675       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:13:04.218818       1 metrics.go:72] Registering metrics
	I1124 04:13:04.218912       1 controller.go:711] "Syncing nftables rules"
	I1124 04:13:13.935700       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:13:13.935756       1 main.go:301] handling current node
	I1124 04:13:23.925898       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:13:23.926005       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e92eb72ed71de38f3400d77d552d76a87d93679420ace1c6cc54f0c8ff30d355] <==
	I1124 04:12:44.258624       1 aggregator.go:166] initial CRD sync complete...
	I1124 04:12:44.258646       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 04:12:44.258653       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 04:12:44.258661       1 cache.go:39] Caches are synced for autoregister controller
	I1124 04:12:44.260051       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 04:12:44.261177       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 04:12:44.265437       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 04:12:44.265464       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 04:12:44.285461       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:12:44.292351       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 04:12:44.964252       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 04:12:44.969700       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 04:12:44.969723       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:12:45.778353       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:12:45.832417       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:12:45.987929       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 04:12:45.995066       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 04:12:45.996255       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 04:12:46.006109       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 04:12:46.239518       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 04:12:47.615707       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 04:12:47.632849       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 04:12:47.648259       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 04:12:59.897255       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1124 04:12:59.941505       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [d5d3fc49c5cea51e3a21675071f1d86b516b461c1984a89cdbf0d8bffcdfec19] <==
	I1124 04:12:59.479800       1 shared_informer.go:318] Caches are synced for PVC protection
	I1124 04:12:59.491837       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 04:12:59.814129       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 04:12:59.829291       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 04:12:59.829339       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 04:12:59.903703       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1124 04:12:59.959442       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lkhzw"
	I1124 04:12:59.964423       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7ml4n"
	I1124 04:13:00.289629       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-l7qc8"
	I1124 04:13:00.352103       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-c5hgr"
	I1124 04:13:00.406555       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="504.219513ms"
	I1124 04:13:00.451756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.123517ms"
	I1124 04:13:00.515210       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.302347ms"
	I1124 04:13:00.515356       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.752µs"
	I1124 04:13:01.840439       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 04:13:01.865354       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-l7qc8"
	I1124 04:13:01.886716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.166524ms"
	I1124 04:13:01.914173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.400267ms"
	I1124 04:13:01.914512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="129.569µs"
	I1124 04:13:14.500988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.21µs"
	I1124 04:13:14.516891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.861µs"
	I1124 04:13:14.962620       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.739µs"
	I1124 04:13:15.971899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.799786ms"
	I1124 04:13:15.973048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.587µs"
	I1124 04:13:19.283949       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [cbacfe3637389f14500aedb2758e456f085b24a9d24ffc913c3fa1c25ab10053] <==
	I1124 04:13:00.924903       1 server_others.go:69] "Using iptables proxy"
	I1124 04:13:00.944963       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1124 04:13:00.991363       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:13:00.996698       1 server_others.go:152] "Using iptables Proxier"
	I1124 04:13:00.996732       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 04:13:00.996743       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 04:13:00.996770       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 04:13:00.996982       1 server.go:846] "Version info" version="v1.28.0"
	I1124 04:13:00.996992       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:13:00.998393       1 config.go:188] "Starting service config controller"
	I1124 04:13:00.998407       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 04:13:00.998426       1 config.go:97] "Starting endpoint slice config controller"
	I1124 04:13:00.998430       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 04:13:01.001426       1 config.go:315] "Starting node config controller"
	I1124 04:13:01.010211       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 04:13:01.099273       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1124 04:13:01.099341       1 shared_informer.go:318] Caches are synced for service config
	I1124 04:13:01.110623       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [9a661ba4ef371c23a423b4f2fd535fa188142689df537a4b2bec8b25a54b668c] <==
	W1124 04:12:44.253850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1124 04:12:44.255146       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1124 04:12:44.255275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 04:12:44.255318       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 04:12:45.084421       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1124 04:12:45.084579       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1124 04:12:45.216975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1124 04:12:45.217047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1124 04:12:45.223288       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1124 04:12:45.223335       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1124 04:12:45.251566       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1124 04:12:45.251607       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1124 04:12:45.325376       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1124 04:12:45.325419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1124 04:12:45.327146       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 04:12:45.327184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 04:12:45.328732       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1124 04:12:45.328767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1124 04:12:45.430543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1124 04:12:45.430578       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1124 04:12:45.432112       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1124 04:12:45.432150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1124 04:12:45.759704       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1124 04:12:45.759739       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1124 04:12:49.039454       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 04:13:00 old-k8s-version-762702 kubelet[1369]: I1124 04:13:00.164462    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/db06bd2a-7e8a-49e3-a17f-62b681f600d1-cni-cfg\") pod \"kindnet-lkhzw\" (UID: \"db06bd2a-7e8a-49e3-a17f-62b681f600d1\") " pod="kube-system/kindnet-lkhzw"
	Nov 24 04:13:00 old-k8s-version-762702 kubelet[1369]: I1124 04:13:00.164547    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db06bd2a-7e8a-49e3-a17f-62b681f600d1-lib-modules\") pod \"kindnet-lkhzw\" (UID: \"db06bd2a-7e8a-49e3-a17f-62b681f600d1\") " pod="kube-system/kindnet-lkhzw"
	Nov 24 04:13:00 old-k8s-version-762702 kubelet[1369]: I1124 04:13:00.164576    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ed410af-141e-4197-9a5c-6900dc8e35e6-lib-modules\") pod \"kube-proxy-7ml4n\" (UID: \"1ed410af-141e-4197-9a5c-6900dc8e35e6\") " pod="kube-system/kube-proxy-7ml4n"
	Nov 24 04:13:00 old-k8s-version-762702 kubelet[1369]: I1124 04:13:00.164624    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqnkb\" (UniqueName: \"kubernetes.io/projected/db06bd2a-7e8a-49e3-a17f-62b681f600d1-kube-api-access-jqnkb\") pod \"kindnet-lkhzw\" (UID: \"db06bd2a-7e8a-49e3-a17f-62b681f600d1\") " pod="kube-system/kindnet-lkhzw"
	Nov 24 04:13:00 old-k8s-version-762702 kubelet[1369]: I1124 04:13:00.164656    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ed410af-141e-4197-9a5c-6900dc8e35e6-xtables-lock\") pod \"kube-proxy-7ml4n\" (UID: \"1ed410af-141e-4197-9a5c-6900dc8e35e6\") " pod="kube-system/kube-proxy-7ml4n"
	Nov 24 04:13:00 old-k8s-version-762702 kubelet[1369]: I1124 04:13:00.164680    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ed410af-141e-4197-9a5c-6900dc8e35e6-kube-proxy\") pod \"kube-proxy-7ml4n\" (UID: \"1ed410af-141e-4197-9a5c-6900dc8e35e6\") " pod="kube-system/kube-proxy-7ml4n"
	Nov 24 04:13:00 old-k8s-version-762702 kubelet[1369]: I1124 04:13:00.164715    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sh5x\" (UniqueName: \"kubernetes.io/projected/1ed410af-141e-4197-9a5c-6900dc8e35e6-kube-api-access-9sh5x\") pod \"kube-proxy-7ml4n\" (UID: \"1ed410af-141e-4197-9a5c-6900dc8e35e6\") " pod="kube-system/kube-proxy-7ml4n"
	Nov 24 04:13:00 old-k8s-version-762702 kubelet[1369]: I1124 04:13:00.164745    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db06bd2a-7e8a-49e3-a17f-62b681f600d1-xtables-lock\") pod \"kindnet-lkhzw\" (UID: \"db06bd2a-7e8a-49e3-a17f-62b681f600d1\") " pod="kube-system/kindnet-lkhzw"
	Nov 24 04:13:00 old-k8s-version-762702 kubelet[1369]: W1124 04:13:00.606349    1369 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/crio-e442588be61fba35503c3b9cd2bcc5eb50a69bf910644f40c801193114c6938c WatchSource:0}: Error finding container e442588be61fba35503c3b9cd2bcc5eb50a69bf910644f40c801193114c6938c: Status 404 returned error can't find the container with id e442588be61fba35503c3b9cd2bcc5eb50a69bf910644f40c801193114c6938c
	Nov 24 04:13:00 old-k8s-version-762702 kubelet[1369]: W1124 04:13:00.631504    1369 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/crio-a95506dbd13a3fb22599d9891cabb6c39c47d90c40a070f2ca8457279a84cebb WatchSource:0}: Error finding container a95506dbd13a3fb22599d9891cabb6c39c47d90c40a070f2ca8457279a84cebb: Status 404 returned error can't find the container with id a95506dbd13a3fb22599d9891cabb6c39c47d90c40a070f2ca8457279a84cebb
	Nov 24 04:13:03 old-k8s-version-762702 kubelet[1369]: I1124 04:13:03.930105    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7ml4n" podStartSLOduration=4.930060267 podCreationTimestamp="2025-11-24 04:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:13:00.925592024 +0000 UTC m=+13.361770715" watchObservedRunningTime="2025-11-24 04:13:03.930060267 +0000 UTC m=+16.366238958"
	Nov 24 04:13:14 old-k8s-version-762702 kubelet[1369]: I1124 04:13:14.465378    1369 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 04:13:14 old-k8s-version-762702 kubelet[1369]: I1124 04:13:14.498045    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-lkhzw" podStartSLOduration=12.505871911 podCreationTimestamp="2025-11-24 04:12:59 +0000 UTC" firstStartedPulling="2025-11-24 04:13:00.647876303 +0000 UTC m=+13.084054986" lastFinishedPulling="2025-11-24 04:13:03.640006879 +0000 UTC m=+16.076185562" observedRunningTime="2025-11-24 04:13:03.93157187 +0000 UTC m=+16.367750586" watchObservedRunningTime="2025-11-24 04:13:14.498002487 +0000 UTC m=+26.934181178"
	Nov 24 04:13:14 old-k8s-version-762702 kubelet[1369]: I1124 04:13:14.498526    1369 topology_manager.go:215] "Topology Admit Handler" podUID="7d0b287f-b2e8-461f-abf4-71700b66caf8" podNamespace="kube-system" podName="coredns-5dd5756b68-c5hgr"
	Nov 24 04:13:14 old-k8s-version-762702 kubelet[1369]: I1124 04:13:14.499727    1369 topology_manager.go:215] "Topology Admit Handler" podUID="8af39921-2789-4cc5-974a-89f0667a6e47" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 04:13:14 old-k8s-version-762702 kubelet[1369]: I1124 04:13:14.673675    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d0b287f-b2e8-461f-abf4-71700b66caf8-config-volume\") pod \"coredns-5dd5756b68-c5hgr\" (UID: \"7d0b287f-b2e8-461f-abf4-71700b66caf8\") " pod="kube-system/coredns-5dd5756b68-c5hgr"
	Nov 24 04:13:14 old-k8s-version-762702 kubelet[1369]: I1124 04:13:14.673816    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7r62\" (UniqueName: \"kubernetes.io/projected/7d0b287f-b2e8-461f-abf4-71700b66caf8-kube-api-access-z7r62\") pod \"coredns-5dd5756b68-c5hgr\" (UID: \"7d0b287f-b2e8-461f-abf4-71700b66caf8\") " pod="kube-system/coredns-5dd5756b68-c5hgr"
	Nov 24 04:13:14 old-k8s-version-762702 kubelet[1369]: I1124 04:13:14.673846    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8af39921-2789-4cc5-974a-89f0667a6e47-tmp\") pod \"storage-provisioner\" (UID: \"8af39921-2789-4cc5-974a-89f0667a6e47\") " pod="kube-system/storage-provisioner"
	Nov 24 04:13:14 old-k8s-version-762702 kubelet[1369]: I1124 04:13:14.673905    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8cgh\" (UniqueName: \"kubernetes.io/projected/8af39921-2789-4cc5-974a-89f0667a6e47-kube-api-access-k8cgh\") pod \"storage-provisioner\" (UID: \"8af39921-2789-4cc5-974a-89f0667a6e47\") " pod="kube-system/storage-provisioner"
	Nov 24 04:13:14 old-k8s-version-762702 kubelet[1369]: W1124 04:13:14.859085    1369 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/crio-905335a3019f28f3bbc87bdf7dfafdf212cf95a3ab26bdb0dce6fda28d6413fe WatchSource:0}: Error finding container 905335a3019f28f3bbc87bdf7dfafdf212cf95a3ab26bdb0dce6fda28d6413fe: Status 404 returned error can't find the container with id 905335a3019f28f3bbc87bdf7dfafdf212cf95a3ab26bdb0dce6fda28d6413fe
	Nov 24 04:13:14 old-k8s-version-762702 kubelet[1369]: I1124 04:13:14.998227    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-c5hgr" podStartSLOduration=14.998164943999999 podCreationTimestamp="2025-11-24 04:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:13:14.96169317 +0000 UTC m=+27.397871853" watchObservedRunningTime="2025-11-24 04:13:14.998164944 +0000 UTC m=+27.434343627"
	Nov 24 04:13:15 old-k8s-version-762702 kubelet[1369]: I1124 04:13:15.957265    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.957212178 podCreationTimestamp="2025-11-24 04:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:13:14.999282662 +0000 UTC m=+27.435461370" watchObservedRunningTime="2025-11-24 04:13:15.957212178 +0000 UTC m=+28.393390861"
	Nov 24 04:13:17 old-k8s-version-762702 kubelet[1369]: I1124 04:13:17.869366    1369 topology_manager.go:215] "Topology Admit Handler" podUID="9b6392ee-0350-4790-80de-baef7e6db4f3" podNamespace="default" podName="busybox"
	Nov 24 04:13:18 old-k8s-version-762702 kubelet[1369]: I1124 04:13:17.997925    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p728g\" (UniqueName: \"kubernetes.io/projected/9b6392ee-0350-4790-80de-baef7e6db4f3-kube-api-access-p728g\") pod \"busybox\" (UID: \"9b6392ee-0350-4790-80de-baef7e6db4f3\") " pod="default/busybox"
	Nov 24 04:13:18 old-k8s-version-762702 kubelet[1369]: W1124 04:13:18.197106    1369 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/crio-3becf06db1e84ff45c2e1419c0f0650c59b644fba2063935f3cf18a44638fd85 WatchSource:0}: Error finding container 3becf06db1e84ff45c2e1419c0f0650c59b644fba2063935f3cf18a44638fd85: Status 404 returned error can't find the container with id 3becf06db1e84ff45c2e1419c0f0650c59b644fba2063935f3cf18a44638fd85
	
	
	==> storage-provisioner [441d5ed99c345c3a7c4beda418de2c5fa7368959ed8d108855ac19f96ab45d1f] <==
	I1124 04:13:14.913730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 04:13:14.973623       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 04:13:14.986243       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 04:13:15.009311       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 04:13:15.009663       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-762702_2bd79681-adcc-4076-a150-6e8807659218!
	I1124 04:13:15.017848       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfe1276a-d502-46a4-811c-2d6200e130b0", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-762702_2bd79681-adcc-4076-a150-6e8807659218 became leader
	I1124 04:13:15.110211       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-762702_2bd79681-adcc-4076-a150-6e8807659218!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-762702 -n old-k8s-version-762702
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-762702 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.43s)
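The post-mortem dump above is produced automatically by the test helpers, but the same view can be rebuilt by hand while the profile is still running. A minimal sketch, assuming the old-k8s-version-762702 profile from this run is still up and using the same crictl label filter the helpers use:

	# re-list the kube-system containers shown in the container table above
	minikube ssh -p old-k8s-version-762702 -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	# regenerate the "describe nodes" section
	kubectl --context old-k8s-version-762702 describe node old-k8s-version-762702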

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-762702 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-762702 --alsologtostderr -v=1: exit status 80 (2.140802201s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-762702 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 04:14:44.656672  475169 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:14:44.656886  475169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:14:44.656898  475169 out.go:374] Setting ErrFile to fd 2...
	I1124 04:14:44.656904  475169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:14:44.657162  475169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:14:44.657452  475169 out.go:368] Setting JSON to false
	I1124 04:14:44.657484  475169 mustload.go:66] Loading cluster: old-k8s-version-762702
	I1124 04:14:44.657882  475169 config.go:182] Loaded profile config "old-k8s-version-762702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 04:14:44.658392  475169 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:14:44.678867  475169 host.go:66] Checking if "old-k8s-version-762702" exists ...
	I1124 04:14:44.679196  475169 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:14:44.747423  475169 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 04:14:44.738187699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:14:44.748049  475169 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763935228-21975/minikube-v1.37.0-1763935228-21975-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763935228-21975-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-762702 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 04:14:44.751525  475169 out.go:179] * Pausing node old-k8s-version-762702 ... 
	I1124 04:14:44.754366  475169 host.go:66] Checking if "old-k8s-version-762702" exists ...
	I1124 04:14:44.755053  475169 ssh_runner.go:195] Run: systemctl --version
	I1124 04:14:44.755110  475169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:14:44.772569  475169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:14:44.877438  475169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:14:44.895801  475169 pause.go:52] kubelet running: true
	I1124 04:14:44.895864  475169 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:14:45.267192  475169 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:14:45.267293  475169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:14:45.351859  475169 cri.go:89] found id: "6d5b65108d13f59e0a0172242b6877e1aa183242ced52c6bb7d03102dd8bc068"
	I1124 04:14:45.351887  475169 cri.go:89] found id: "7a9b8cf11ce99604979c137a579bdbc8e2fadf7960914a459db24120e33d0076"
	I1124 04:14:45.351893  475169 cri.go:89] found id: "d08b3a0aabdbd70340b4745beb5f9d34a57c7e1f07a3837a7b5ed36377e70cff"
	I1124 04:14:45.351898  475169 cri.go:89] found id: "0baddab83fc97881a782442c419e631b24a2e0920b5bbc40571b6ca47409b609"
	I1124 04:14:45.351901  475169 cri.go:89] found id: "8cd173cea9a6e0c8848501a56105eae2eb7845d3ad6d5d080437ff7aea8df499"
	I1124 04:14:45.351904  475169 cri.go:89] found id: "3dda3e01322889ba6ae662cf36d250b792923223d15823ac24db5c9c42c3272c"
	I1124 04:14:45.351908  475169 cri.go:89] found id: "b120202a1fd97058a5aedf9f2bb21f0de530aaeecb2a7185c93067ac1ee7214d"
	I1124 04:14:45.351911  475169 cri.go:89] found id: "54e9d746b3ca2739d8be883f3078b9d3c9c03574f0b6d7975d0cec75f406d75d"
	I1124 04:14:45.351914  475169 cri.go:89] found id: "bbf50eb55a9501a33ac2de73d034111945ecb64e8907c5f3016c733432c67d30"
	I1124 04:14:45.351921  475169 cri.go:89] found id: "b89b8f495cfe78a41475c2ab6476b4f7445f50c81d516ab1dd5a3fd23f6c3420"
	I1124 04:14:45.351926  475169 cri.go:89] found id: "0b878cc21f861b10f2b465a433c6552197db070682805f4c2674b0aa81bf3844"
	I1124 04:14:45.351929  475169 cri.go:89] found id: ""
	I1124 04:14:45.351979  475169 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:14:45.380451  475169 retry.go:31] will retry after 266.222826ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:14:45Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:14:45.647004  475169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:14:45.661854  475169 pause.go:52] kubelet running: false
	I1124 04:14:45.661929  475169 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:14:45.854324  475169 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:14:45.854403  475169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:14:45.924267  475169 cri.go:89] found id: "6d5b65108d13f59e0a0172242b6877e1aa183242ced52c6bb7d03102dd8bc068"
	I1124 04:14:45.924292  475169 cri.go:89] found id: "7a9b8cf11ce99604979c137a579bdbc8e2fadf7960914a459db24120e33d0076"
	I1124 04:14:45.924298  475169 cri.go:89] found id: "d08b3a0aabdbd70340b4745beb5f9d34a57c7e1f07a3837a7b5ed36377e70cff"
	I1124 04:14:45.924302  475169 cri.go:89] found id: "0baddab83fc97881a782442c419e631b24a2e0920b5bbc40571b6ca47409b609"
	I1124 04:14:45.924305  475169 cri.go:89] found id: "8cd173cea9a6e0c8848501a56105eae2eb7845d3ad6d5d080437ff7aea8df499"
	I1124 04:14:45.924309  475169 cri.go:89] found id: "3dda3e01322889ba6ae662cf36d250b792923223d15823ac24db5c9c42c3272c"
	I1124 04:14:45.924316  475169 cri.go:89] found id: "b120202a1fd97058a5aedf9f2bb21f0de530aaeecb2a7185c93067ac1ee7214d"
	I1124 04:14:45.924319  475169 cri.go:89] found id: "54e9d746b3ca2739d8be883f3078b9d3c9c03574f0b6d7975d0cec75f406d75d"
	I1124 04:14:45.924322  475169 cri.go:89] found id: "bbf50eb55a9501a33ac2de73d034111945ecb64e8907c5f3016c733432c67d30"
	I1124 04:14:45.924328  475169 cri.go:89] found id: "b89b8f495cfe78a41475c2ab6476b4f7445f50c81d516ab1dd5a3fd23f6c3420"
	I1124 04:14:45.924332  475169 cri.go:89] found id: "0b878cc21f861b10f2b465a433c6552197db070682805f4c2674b0aa81bf3844"
	I1124 04:14:45.924335  475169 cri.go:89] found id: ""
	I1124 04:14:45.924390  475169 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:14:45.935549  475169 retry.go:31] will retry after 494.475893ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:14:45Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:14:46.430248  475169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:14:46.443654  475169 pause.go:52] kubelet running: false
	I1124 04:14:46.443734  475169 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:14:46.608314  475169 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:14:46.608423  475169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:14:46.693802  475169 cri.go:89] found id: "6d5b65108d13f59e0a0172242b6877e1aa183242ced52c6bb7d03102dd8bc068"
	I1124 04:14:46.693827  475169 cri.go:89] found id: "7a9b8cf11ce99604979c137a579bdbc8e2fadf7960914a459db24120e33d0076"
	I1124 04:14:46.693832  475169 cri.go:89] found id: "d08b3a0aabdbd70340b4745beb5f9d34a57c7e1f07a3837a7b5ed36377e70cff"
	I1124 04:14:46.693837  475169 cri.go:89] found id: "0baddab83fc97881a782442c419e631b24a2e0920b5bbc40571b6ca47409b609"
	I1124 04:14:46.693840  475169 cri.go:89] found id: "8cd173cea9a6e0c8848501a56105eae2eb7845d3ad6d5d080437ff7aea8df499"
	I1124 04:14:46.693844  475169 cri.go:89] found id: "3dda3e01322889ba6ae662cf36d250b792923223d15823ac24db5c9c42c3272c"
	I1124 04:14:46.693847  475169 cri.go:89] found id: "b120202a1fd97058a5aedf9f2bb21f0de530aaeecb2a7185c93067ac1ee7214d"
	I1124 04:14:46.693850  475169 cri.go:89] found id: "54e9d746b3ca2739d8be883f3078b9d3c9c03574f0b6d7975d0cec75f406d75d"
	I1124 04:14:46.693853  475169 cri.go:89] found id: "bbf50eb55a9501a33ac2de73d034111945ecb64e8907c5f3016c733432c67d30"
	I1124 04:14:46.693859  475169 cri.go:89] found id: "b89b8f495cfe78a41475c2ab6476b4f7445f50c81d516ab1dd5a3fd23f6c3420"
	I1124 04:14:46.693862  475169 cri.go:89] found id: "0b878cc21f861b10f2b465a433c6552197db070682805f4c2674b0aa81bf3844"
	I1124 04:14:46.693866  475169 cri.go:89] found id: ""
	I1124 04:14:46.693930  475169 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:14:46.712588  475169 out.go:203] 
	W1124 04:14:46.716121  475169 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:14:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:14:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 04:14:46.716149  475169 out.go:285] * 
	* 
	W1124 04:14:46.722260  475169 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 04:14:46.725543  475169 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-762702 --alsologtostderr -v=1 failed: exit status 80
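Note on the failure mode: the GUEST_PAUSE error above is runc failing with "open /run/runc: no such file or directory". The node mounts /run as tmpfs (see the "Tmpfs" section of the docker inspect output below), and /run/runc is runc's default state directory, so it is absent whenever no runc-managed containers are up. A minimal manual check, assuming the profile name from this run and a minikube binary on PATH (a diagnostic sketch, not part of the harness):

	# Does runc's state directory exist inside the kic node?
	minikube ssh -p old-k8s-version-762702 -- 'ls -ld /run/runc || echo "/run/runc missing"'
	# Re-run the exact command the pause path uses (exits 1 in this state).
	minikube ssh -p old-k8s-version-762702 -- 'sudo runc list -f json; echo "exit=$?"'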
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-762702
helpers_test.go:243: (dbg) docker inspect old-k8s-version-762702:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a",
	        "Created": "2025-11-24T04:12:22.608705618Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473035,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:13:43.165474035Z",
	            "FinishedAt": "2025-11-24T04:13:42.309924062Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/hosts",
	        "LogPath": "/var/lib/docker/containers/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a-json.log",
	        "Name": "/old-k8s-version-762702",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-762702:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-762702",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a",
	                "LowerDir": "/var/lib/docker/overlay2/653c33f0be4a366cb5cc86ca2501e9ef033df8c8abee4cc8bc2eca215ba11542-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/653c33f0be4a366cb5cc86ca2501e9ef033df8c8abee4cc8bc2eca215ba11542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/653c33f0be4a366cb5cc86ca2501e9ef033df8c8abee4cc8bc2eca215ba11542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/653c33f0be4a366cb5cc86ca2501e9ef033df8c8abee4cc8bc2eca215ba11542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-762702",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-762702/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-762702",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-762702",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-762702",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "099dd3f6b3c1284a58643ae9891d5cfc2f89029daa0e8b273f37a5c3d01e7f9c",
	            "SandboxKey": "/var/run/docker/netns/099dd3f6b3c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-762702": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:d8:b1:8b:0f:7c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2839db71c04bcd656cafcc00851680b9c7cc53726d05c9804df0e7524d958ffa",
	                    "EndpointID": "d6bb48f3a820359570e5a65cd4df6e36300a2acdddb5a87c453e99d869ab94bb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-762702",
	                        "b9dfaaddc60d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
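The full docker inspect dump above is kept for the archive; when triaging interactively, the same fields can be pulled with Go templates, as the harness itself does below for .State.Status and the 22/tcp port mapping. A sketch against the same container, with field paths taken from the dump above:

	# Lifecycle fields relevant to a failed pause.
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}' old-k8s-version-762702
	# Host port mapped to the apiserver (8443/tcp), mirroring the 22/tcp template used later in this log.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-762702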
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-762702 -n old-k8s-version-762702
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-762702 -n old-k8s-version-762702: exit status 2 (375.743493ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
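Exit status 2 with Host reporting "Running" is consistent with a half-paused cluster: the pause path had already run systemctl disable --now kubelet before the runc step failed, so the host container is up while the Kubernetes components are down. For the per-component breakdown rather than the single {{.Host}} field, status can emit JSON; a sketch assuming the same test binary and profile:

	# Full status object (Host, Kubelet, APIServer, Kubeconfig fields).
	out/minikube-linux-arm64 status -p old-k8s-version-762702 --output=json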
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-762702 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-762702 logs -n 25: (1.335802367s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-778509 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo containerd config dump                                                                                                                                                                                                  │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo crio config                                                                                                                                                                                                             │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ delete  │ -p cilium-778509                                                                                                                                                                                                                              │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │ 24 Nov 25 04:10 UTC │
	│ start   │ -p force-systemd-env-400958 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-400958  │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │ 24 Nov 25 04:11 UTC │
	│ delete  │ -p kubernetes-upgrade-207884                                                                                                                                                                                                                  │ kubernetes-upgrade-207884 │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ start   │ -p cert-expiration-918798 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-918798    │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ delete  │ -p force-systemd-env-400958                                                                                                                                                                                                                   │ force-systemd-env-400958  │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ start   │ -p cert-options-967682 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:12 UTC │
	│ ssh     │ cert-options-967682 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ ssh     │ -p cert-options-967682 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ delete  │ -p cert-options-967682                                                                                                                                                                                                                        │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-762702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │                     │
	│ stop    │ -p old-k8s-version-762702 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-762702 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:13 UTC │
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:14 UTC │
	│ image   │ old-k8s-version-762702 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ pause   │ -p old-k8s-version-762702 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:13:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:13:42.860676  472908 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:13:42.860798  472908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:13:42.860809  472908 out.go:374] Setting ErrFile to fd 2...
	I1124 04:13:42.860815  472908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:13:42.861095  472908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:13:42.861494  472908 out.go:368] Setting JSON to false
	I1124 04:13:42.862408  472908 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10552,"bootTime":1763947071,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:13:42.862562  472908 start.go:143] virtualization:  
	I1124 04:13:42.867689  472908 out.go:179] * [old-k8s-version-762702] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:13:42.870750  472908 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:13:42.870900  472908 notify.go:221] Checking for updates...
	I1124 04:13:42.877144  472908 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:13:42.880228  472908 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:13:42.883588  472908 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:13:42.888507  472908 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:13:42.891557  472908 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:13:42.895126  472908 config.go:182] Loaded profile config "old-k8s-version-762702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 04:13:42.898794  472908 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1124 04:13:42.901560  472908 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:13:42.938973  472908 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:13:42.939113  472908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:13:43.014344  472908 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:13:43.002806559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:13:43.014490  472908 docker.go:319] overlay module found
	I1124 04:13:43.017722  472908 out.go:179] * Using the docker driver based on existing profile
	I1124 04:13:43.020610  472908 start.go:309] selected driver: docker
	I1124 04:13:43.020641  472908 start.go:927] validating driver "docker" against &{Name:old-k8s-version-762702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:13:43.020754  472908 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:13:43.021487  472908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:13:43.078400  472908 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:13:43.068062515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:13:43.078968  472908 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:13:43.079000  472908 cni.go:84] Creating CNI manager for ""
	I1124 04:13:43.079063  472908 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:13:43.079103  472908 start.go:353] cluster config:
	{Name:old-k8s-version-762702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:13:43.082330  472908 out.go:179] * Starting "old-k8s-version-762702" primary control-plane node in "old-k8s-version-762702" cluster
	I1124 04:13:43.085167  472908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:13:43.088115  472908 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:13:43.090952  472908 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 04:13:43.091002  472908 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1124 04:13:43.091016  472908 cache.go:65] Caching tarball of preloaded images
	I1124 04:13:43.091027  472908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:13:43.091110  472908 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:13:43.091121  472908 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1124 04:13:43.091237  472908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/config.json ...
	I1124 04:13:43.111965  472908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:13:43.111986  472908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:13:43.112007  472908 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:13:43.112037  472908 start.go:360] acquireMachinesLock for old-k8s-version-762702: {Name:mk39e7bd6d63be24b0c5297d3d6b80f2dd18eb45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:13:43.112101  472908 start.go:364] duration metric: took 39.098µs to acquireMachinesLock for "old-k8s-version-762702"
	I1124 04:13:43.112124  472908 start.go:96] Skipping create...Using existing machine configuration
	I1124 04:13:43.112131  472908 fix.go:54] fixHost starting: 
	I1124 04:13:43.112398  472908 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:43.129439  472908 fix.go:112] recreateIfNeeded on old-k8s-version-762702: state=Stopped err=<nil>
	W1124 04:13:43.129468  472908 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 04:13:43.132773  472908 out.go:252] * Restarting existing docker container for "old-k8s-version-762702" ...
	I1124 04:13:43.132882  472908 cli_runner.go:164] Run: docker start old-k8s-version-762702
	I1124 04:13:43.401012  472908 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:43.423905  472908 kic.go:430] container "old-k8s-version-762702" state is running.
	I1124 04:13:43.424317  472908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-762702
	I1124 04:13:43.448402  472908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/config.json ...
	I1124 04:13:43.448635  472908 machine.go:94] provisionDockerMachine start ...
	I1124 04:13:43.448695  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:43.473422  472908 main.go:143] libmachine: Using SSH client type: native
	I1124 04:13:43.473766  472908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1124 04:13:43.473782  472908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:13:43.474483  472908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 04:13:46.622216  472908 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-762702
	
	I1124 04:13:46.622239  472908 ubuntu.go:182] provisioning hostname "old-k8s-version-762702"
	I1124 04:13:46.622313  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:46.640428  472908 main.go:143] libmachine: Using SSH client type: native
	I1124 04:13:46.640746  472908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1124 04:13:46.640758  472908 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-762702 && echo "old-k8s-version-762702" | sudo tee /etc/hostname
	I1124 04:13:46.802208  472908 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-762702
	
	I1124 04:13:46.802284  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:46.822047  472908 main.go:143] libmachine: Using SSH client type: native
	I1124 04:13:46.822379  472908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1124 04:13:46.822403  472908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-762702' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-762702/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-762702' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 04:13:46.970877  472908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 04:13:46.970917  472908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:13:46.970969  472908 ubuntu.go:190] setting up certificates
	I1124 04:13:46.970979  472908 provision.go:84] configureAuth start
	I1124 04:13:46.971052  472908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-762702
	I1124 04:13:46.993312  472908 provision.go:143] copyHostCerts
	I1124 04:13:46.993401  472908 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:13:46.993420  472908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:13:46.993503  472908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:13:46.993604  472908 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:13:46.993617  472908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:13:46.993645  472908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:13:46.993751  472908 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:13:46.993763  472908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:13:46.993791  472908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:13:46.993845  472908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-762702 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-762702]
	I1124 04:13:47.446820  472908 provision.go:177] copyRemoteCerts
	I1124 04:13:47.446888  472908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:13:47.446935  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:47.463932  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:47.566124  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:13:47.583683  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 04:13:47.600988  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 04:13:47.619125  472908 provision.go:87] duration metric: took 648.119496ms to configureAuth
	I1124 04:13:47.619152  472908 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:13:47.619352  472908 config.go:182] Loaded profile config "old-k8s-version-762702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 04:13:47.619457  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:47.637149  472908 main.go:143] libmachine: Using SSH client type: native
	I1124 04:13:47.637462  472908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1124 04:13:47.637474  472908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:13:48.008204  472908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 04:13:48.008290  472908 machine.go:97] duration metric: took 4.559644457s to provisionDockerMachine
	I1124 04:13:48.008320  472908 start.go:293] postStartSetup for "old-k8s-version-762702" (driver="docker")
	I1124 04:13:48.008359  472908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:13:48.008476  472908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:13:48.008543  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:48.027417  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:48.134817  472908 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:13:48.138654  472908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:13:48.138692  472908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:13:48.138734  472908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:13:48.138903  472908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:13:48.139022  472908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:13:48.139135  472908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:13:48.148946  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:13:48.167262  472908 start.go:296] duration metric: took 158.900199ms for postStartSetup
	I1124 04:13:48.167364  472908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:13:48.167418  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:48.185664  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:48.287803  472908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:13:48.292841  472908 fix.go:56] duration metric: took 5.180702363s for fixHost
	I1124 04:13:48.292870  472908 start.go:83] releasing machines lock for "old-k8s-version-762702", held for 5.180756139s
	I1124 04:13:48.292950  472908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-762702
	I1124 04:13:48.310858  472908 ssh_runner.go:195] Run: cat /version.json
	I1124 04:13:48.310876  472908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:13:48.310909  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:48.310930  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:48.328636  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:48.330518  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:48.434254  472908 ssh_runner.go:195] Run: systemctl --version
	I1124 04:13:48.527259  472908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:13:48.569399  472908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:13:48.574795  472908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:13:48.574871  472908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:13:48.582872  472908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 04:13:48.582895  472908 start.go:496] detecting cgroup driver to use...
	I1124 04:13:48.582928  472908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:13:48.582976  472908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:13:48.598129  472908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:13:48.611571  472908 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:13:48.611680  472908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:13:48.627347  472908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:13:48.641021  472908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:13:48.764080  472908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:13:48.888025  472908 docker.go:234] disabling docker service ...
	I1124 04:13:48.888091  472908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:13:48.903536  472908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:13:48.916985  472908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:13:49.050246  472908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:13:49.195890  472908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:13:49.209751  472908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:13:49.225218  472908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1124 04:13:49.225308  472908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.235181  472908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:13:49.235274  472908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.244484  472908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.254835  472908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.263648  472908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:13:49.271897  472908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.281274  472908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.290391  472908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.299678  472908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:13:49.307561  472908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
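	
	The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place; a condensed sketch of the same edits plus the ip_forward toggle:
	
	  CONF=/etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	  sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                    # drop any stale value first
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	  # let pods bind ports below 1024 without extra capabilities
	  sudo grep -q '^ *default_sysctls' "$CONF" || \
	    sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	  sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'            # needed for pod routing
	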
	I1124 04:13:49.315115  472908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:13:49.440135  472908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 04:13:49.605005  472908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:13:49.605087  472908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:13:49.609005  472908 start.go:564] Will wait 60s for crictl version
	I1124 04:13:49.609093  472908 ssh_runner.go:195] Run: which crictl
	I1124 04:13:49.612829  472908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:13:49.637853  472908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
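	
	A sketch of the wait-then-probe step above (the log polls with stat; -S tests for a socket file):
	
	  # wait up to 60s for CRI-O's socket, then confirm the runtime answers CRI calls
	  for _ in $(seq 1 60); do
	    [ -S /var/run/crio/crio.sock ] && break
	    sleep 1
	  done
	  sudo crictl version    # prints RuntimeName/RuntimeVersion/RuntimeApiVersion as above
	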
	I1124 04:13:49.637945  472908 ssh_runner.go:195] Run: crio --version
	I1124 04:13:49.673388  472908 ssh_runner.go:195] Run: crio --version
	I1124 04:13:49.714589  472908 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1124 04:13:49.717524  472908 cli_runner.go:164] Run: docker network inspect old-k8s-version-762702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:13:49.733676  472908 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 04:13:49.737757  472908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
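	
	The hosts-file rewrite above is idempotent: any stale entry is stripped before the fresh one is appended. The same pattern, unrolled:
	
	  # replace (or add) the host.minikube.internal entry without duplicating it
	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    printf '192.168.85.1\thost.minikube.internal\n'
	  } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
	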
	I1124 04:13:49.747222  472908 kubeadm.go:884] updating cluster {Name:old-k8s-version-762702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:13:49.747354  472908 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 04:13:49.747415  472908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:13:49.782232  472908 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:13:49.782259  472908 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:13:49.782324  472908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:13:49.808943  472908 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:13:49.808967  472908 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:13:49.808975  472908 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1124 04:13:49.809071  472908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-762702 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 04:13:49.809150  472908 ssh_runner.go:195] Run: crio config
	I1124 04:13:49.884330  472908 cni.go:84] Creating CNI manager for ""
	I1124 04:13:49.884353  472908 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:13:49.884400  472908 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:13:49.884430  472908 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-762702 NodeName:old-k8s-version-762702 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:13:49.884598  472908 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-762702"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 04:13:49.884671  472908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 04:13:49.892291  472908 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:13:49.892385  472908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:13:49.899738  472908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1124 04:13:49.912667  472908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:13:49.925261  472908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1124 04:13:49.938702  472908 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:13:49.942186  472908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:13:49.952586  472908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:13:50.075118  472908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:13:50.096741  472908 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702 for IP: 192.168.85.2
	I1124 04:13:50.096765  472908 certs.go:195] generating shared ca certs ...
	I1124 04:13:50.096808  472908 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:13:50.097011  472908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:13:50.097092  472908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:13:50.097110  472908 certs.go:257] generating profile certs ...
	I1124 04:13:50.097249  472908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.key
	I1124 04:13:50.097344  472908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.key.8fa10a20
	I1124 04:13:50.097424  472908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.key
	I1124 04:13:50.097557  472908 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:13:50.097611  472908 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:13:50.097628  472908 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:13:50.097675  472908 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:13:50.097720  472908 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:13:50.097753  472908 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:13:50.097862  472908 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:13:50.098594  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:13:50.122978  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:13:50.144621  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:13:50.168399  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:13:50.198261  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 04:13:50.217699  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 04:13:50.243700  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:13:50.270104  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 04:13:50.297220  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:13:50.322851  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:13:50.345027  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:13:50.363912  472908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:13:50.377647  472908 ssh_runner.go:195] Run: openssl version
	I1124 04:13:50.384187  472908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:13:50.392569  472908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:13:50.396302  472908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:13:50.396398  472908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:13:50.439770  472908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 04:13:50.447946  472908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:13:50.456187  472908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:13:50.459899  472908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:13:50.459967  472908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:13:50.500832  472908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:13:50.508897  472908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:13:50.517318  472908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:13:50.521166  472908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:13:50.521248  472908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:13:50.566723  472908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
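	
	The 3ec20f2e.0 / b5213941.0 / 51391683.0 link names above follow OpenSSL's hashed-directory convention: each is the subject hash of the certificate it points at. A sketch of deriving one such link:
	
	  # OpenSSL resolves CAs in /etc/ssl/certs via <subject-hash>.0 symlinks
	  pem=/usr/share/ca-certificates/291389.pem
	  sudo ln -fs "$pem" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$pem").0"
	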
	I1124 04:13:50.574934  472908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:13:50.578792  472908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 04:13:50.620168  472908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 04:13:50.661568  472908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 04:13:50.702922  472908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 04:13:50.754620  472908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 04:13:50.820890  472908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
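	
	Each -checkend 86400 call above asks whether a certificate remains valid for the next 24 hours (exit status 1 means it expires within the window); the same check, looped:
	
	  for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	    openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
	      || echo "${crt}.crt expires within 24h; regeneration needed"
	  done
	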
	I1124 04:13:50.879323  472908 kubeadm.go:401] StartCluster: {Name:old-k8s-version-762702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:13:50.879413  472908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:13:50.879527  472908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:13:50.948487  472908 cri.go:89] found id: "3dda3e01322889ba6ae662cf36d250b792923223d15823ac24db5c9c42c3272c"
	I1124 04:13:50.948510  472908 cri.go:89] found id: "b120202a1fd97058a5aedf9f2bb21f0de530aaeecb2a7185c93067ac1ee7214d"
	I1124 04:13:50.948516  472908 cri.go:89] found id: "54e9d746b3ca2739d8be883f3078b9d3c9c03574f0b6d7975d0cec75f406d75d"
	I1124 04:13:50.948551  472908 cri.go:89] found id: "bbf50eb55a9501a33ac2de73d034111945ecb64e8907c5f3016c733432c67d30"
	I1124 04:13:50.948562  472908 cri.go:89] found id: ""
	I1124 04:13:50.948614  472908 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 04:13:50.968616  472908 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:13:50Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:13:50.968721  472908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:13:50.994502  472908 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 04:13:50.994521  472908 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 04:13:50.994601  472908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 04:13:51.016821  472908 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 04:13:51.017462  472908 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-762702" does not appear in /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:13:51.017773  472908 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-289526/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-762702" cluster setting kubeconfig missing "old-k8s-version-762702" context setting]
	I1124 04:13:51.018258  472908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:13:51.019853  472908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 04:13:51.038779  472908 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 04:13:51.038817  472908 kubeadm.go:602] duration metric: took 44.289317ms to restartPrimaryControlPlane
	I1124 04:13:51.038849  472908 kubeadm.go:403] duration metric: took 159.536046ms to StartCluster
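	
	The "does not require reconfiguration" decision above hinges on the diff run just before it: if the freshly rendered kubeadm.yaml.new matches the deployed kubeadm.yaml, the restart path skips re-running kubeadm. A sketch of that check:
	
	  # exit 0 from diff means no config drift, so no control-plane reconfiguration
	  if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	    echo "cluster config unchanged; restart without re-running kubeadm"
	  fi
	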
	I1124 04:13:51.038873  472908 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:13:51.038954  472908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:13:51.039946  472908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:13:51.040216  472908 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:13:51.040540  472908 config.go:182] Loaded profile config "old-k8s-version-762702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 04:13:51.040694  472908 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:13:51.040959  472908 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-762702"
	I1124 04:13:51.040987  472908 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-762702"
	W1124 04:13:51.041010  472908 addons.go:248] addon storage-provisioner should already be in state true
	I1124 04:13:51.041043  472908 host.go:66] Checking if "old-k8s-version-762702" exists ...
	I1124 04:13:51.041745  472908 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:51.042243  472908 addons.go:70] Setting dashboard=true in profile "old-k8s-version-762702"
	I1124 04:13:51.042267  472908 addons.go:239] Setting addon dashboard=true in "old-k8s-version-762702"
	W1124 04:13:51.042275  472908 addons.go:248] addon dashboard should already be in state true
	I1124 04:13:51.042302  472908 host.go:66] Checking if "old-k8s-version-762702" exists ...
	I1124 04:13:51.042732  472908 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:51.043702  472908 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-762702"
	I1124 04:13:51.043731  472908 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-762702"
	I1124 04:13:51.044065  472908 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:51.047077  472908 out.go:179] * Verifying Kubernetes components...
	I1124 04:13:51.050418  472908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:13:51.100872  472908 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:13:51.104025  472908 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:13:51.104052  472908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:13:51.104114  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:51.106560  472908 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 04:13:51.111852  472908 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-762702"
	W1124 04:13:51.111876  472908 addons.go:248] addon default-storageclass should already be in state true
	I1124 04:13:51.111902  472908 host.go:66] Checking if "old-k8s-version-762702" exists ...
	I1124 04:13:51.112318  472908 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:51.119308  472908 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 04:13:51.126554  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 04:13:51.126592  472908 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 04:13:51.126816  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:51.138715  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:51.163820  472908 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:13:51.163848  472908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:13:51.163923  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:51.179260  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:51.196539  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:51.387802  472908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:13:51.406864  472908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:13:51.416095  472908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:13:51.440418  472908 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-762702" to be "Ready" ...
	I1124 04:13:51.485201  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 04:13:51.485277  472908 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 04:13:51.537593  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 04:13:51.537667  472908 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 04:13:51.648993  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 04:13:51.649068  472908 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 04:13:51.696113  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 04:13:51.696176  472908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 04:13:51.740268  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 04:13:51.740339  472908 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 04:13:51.758975  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 04:13:51.759046  472908 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 04:13:51.775354  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 04:13:51.775431  472908 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 04:13:51.798594  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 04:13:51.798666  472908 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 04:13:51.820562  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 04:13:51.820642  472908 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 04:13:51.840651  472908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
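	
	All ten dashboard manifests go through a single kubectl invocation above; -f repeats, so the equivalent shell form is (list truncated here, paths exactly as in the log line):
	
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.28.0/kubectl apply \
	      -f /etc/kubernetes/addons/dashboard-ns.yaml \
	      -f /etc/kubernetes/addons/dashboard-clusterrole.yaml \
	      -f /etc/kubernetes/addons/dashboard-svc.yaml   # ...and the remaining dashboard-*.yaml
	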
	I1124 04:13:55.630305  472908 node_ready.go:49] node "old-k8s-version-762702" is "Ready"
	I1124 04:13:55.630338  472908 node_ready.go:38] duration metric: took 4.189829283s for node "old-k8s-version-762702" to be "Ready" ...
	I1124 04:13:55.630352  472908 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:13:55.630414  472908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:13:57.300253  472908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.89330774s)
	I1124 04:13:57.300332  472908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.884169934s)
	I1124 04:13:57.736321  472908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.89558088s)
	I1124 04:13:57.736605  472908 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.106177392s)
	I1124 04:13:57.736655  472908 api_server.go:72] duration metric: took 6.696407544s to wait for apiserver process to appear ...
	I1124 04:13:57.736697  472908 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:13:57.736716  472908 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 04:13:57.739761  472908 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-762702 addons enable metrics-server
	
	I1124 04:13:57.743063  472908 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 04:13:57.745352  472908 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
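	
	The healthz probe above can be reproduced with curl; the API server certificate chains to the minikube CA copied earlier to /var/lib/minikube/certs/ca.crt:
	
	  # a 200 with body "ok" means the apiserver is healthy
	  curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.85.2:8443/healthz
	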
	I1124 04:13:57.746830  472908 api_server.go:141] control plane version: v1.28.0
	I1124 04:13:57.746856  472908 api_server.go:131] duration metric: took 10.15176ms to wait for apiserver health ...
	I1124 04:13:57.746866  472908 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:13:57.747031  472908 addons.go:530] duration metric: took 6.7063316s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 04:13:57.750728  472908 system_pods.go:59] 8 kube-system pods found
	I1124 04:13:57.750766  472908 system_pods.go:61] "coredns-5dd5756b68-c5hgr" [7d0b287f-b2e8-461f-abf4-71700b66caf8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:13:57.750808  472908 system_pods.go:61] "etcd-old-k8s-version-762702" [62d0f56d-8e43-47b5-baf7-2af95f42cd81] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:13:57.750821  472908 system_pods.go:61] "kindnet-lkhzw" [db06bd2a-7e8a-49e3-a17f-62b681f600d1] Running
	I1124 04:13:57.750828  472908 system_pods.go:61] "kube-apiserver-old-k8s-version-762702" [efc26447-b9f1-4aa7-a2b8-e2ef56674415] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:13:57.750835  472908 system_pods.go:61] "kube-controller-manager-old-k8s-version-762702" [9817fe2f-c899-4ef9-8e2f-c0b22566b389] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:13:57.750843  472908 system_pods.go:61] "kube-proxy-7ml4n" [1ed410af-141e-4197-9a5c-6900dc8e35e6] Running
	I1124 04:13:57.750850  472908 system_pods.go:61] "kube-scheduler-old-k8s-version-762702" [e1a7d08c-4e60-4f84-a997-3baef7354877] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:13:57.750854  472908 system_pods.go:61] "storage-provisioner" [8af39921-2789-4cc5-974a-89f0667a6e47] Running
	I1124 04:13:57.750862  472908 system_pods.go:74] duration metric: took 3.990684ms to wait for pod list to return data ...
	I1124 04:13:57.750881  472908 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:13:57.753306  472908 default_sa.go:45] found service account: "default"
	I1124 04:13:57.753333  472908 default_sa.go:55] duration metric: took 2.445858ms for default service account to be created ...
	I1124 04:13:57.753343  472908 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 04:13:57.756974  472908 system_pods.go:86] 8 kube-system pods found
	I1124 04:13:57.757007  472908 system_pods.go:89] "coredns-5dd5756b68-c5hgr" [7d0b287f-b2e8-461f-abf4-71700b66caf8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:13:57.757017  472908 system_pods.go:89] "etcd-old-k8s-version-762702" [62d0f56d-8e43-47b5-baf7-2af95f42cd81] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:13:57.757023  472908 system_pods.go:89] "kindnet-lkhzw" [db06bd2a-7e8a-49e3-a17f-62b681f600d1] Running
	I1124 04:13:57.757031  472908 system_pods.go:89] "kube-apiserver-old-k8s-version-762702" [efc26447-b9f1-4aa7-a2b8-e2ef56674415] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:13:57.757038  472908 system_pods.go:89] "kube-controller-manager-old-k8s-version-762702" [9817fe2f-c899-4ef9-8e2f-c0b22566b389] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:13:57.757043  472908 system_pods.go:89] "kube-proxy-7ml4n" [1ed410af-141e-4197-9a5c-6900dc8e35e6] Running
	I1124 04:13:57.757053  472908 system_pods.go:89] "kube-scheduler-old-k8s-version-762702" [e1a7d08c-4e60-4f84-a997-3baef7354877] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:13:57.757057  472908 system_pods.go:89] "storage-provisioner" [8af39921-2789-4cc5-974a-89f0667a6e47] Running
	I1124 04:13:57.757071  472908 system_pods.go:126] duration metric: took 3.72211ms to wait for k8s-apps to be running ...
	I1124 04:13:57.757082  472908 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 04:13:57.757151  472908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:13:57.771318  472908 system_svc.go:56] duration metric: took 14.225801ms WaitForService to wait for kubelet
	I1124 04:13:57.771391  472908 kubeadm.go:587] duration metric: took 6.731141299s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:13:57.771425  472908 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:13:57.774417  472908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:13:57.774534  472908 node_conditions.go:123] node cpu capacity is 2
	I1124 04:13:57.774562  472908 node_conditions.go:105] duration metric: took 3.117786ms to run NodePressure ...
	I1124 04:13:57.774603  472908 start.go:242] waiting for startup goroutines ...
	I1124 04:13:57.774630  472908 start.go:247] waiting for cluster config update ...
	I1124 04:13:57.774656  472908 start.go:256] writing updated cluster config ...
	I1124 04:13:57.774998  472908 ssh_runner.go:195] Run: rm -f paused
	I1124 04:13:57.779299  472908 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:13:57.784445  472908 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-c5hgr" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 04:13:59.791186  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:02.290635  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:04.790138  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:06.790580  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:08.791287  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:11.289930  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:13.290809  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:15.792123  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:18.290640  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:20.791667  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:23.290285  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:25.291478  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:27.790390  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:29.790511  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	I1124 04:14:31.290925  472908 pod_ready.go:94] pod "coredns-5dd5756b68-c5hgr" is "Ready"
	I1124 04:14:31.290954  472908 pod_ready.go:86] duration metric: took 33.506443249s for pod "coredns-5dd5756b68-c5hgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.294343  472908 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.299377  472908 pod_ready.go:94] pod "etcd-old-k8s-version-762702" is "Ready"
	I1124 04:14:31.299401  472908 pod_ready.go:86] duration metric: took 5.033101ms for pod "etcd-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.302665  472908 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.307382  472908 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-762702" is "Ready"
	I1124 04:14:31.307409  472908 pod_ready.go:86] duration metric: took 4.719234ms for pod "kube-apiserver-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.310352  472908 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.488811  472908 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-762702" is "Ready"
	I1124 04:14:31.488844  472908 pod_ready.go:86] duration metric: took 178.463686ms for pod "kube-controller-manager-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.689744  472908 pod_ready.go:83] waiting for pod "kube-proxy-7ml4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:32.088499  472908 pod_ready.go:94] pod "kube-proxy-7ml4n" is "Ready"
	I1124 04:14:32.088527  472908 pod_ready.go:86] duration metric: took 398.75422ms for pod "kube-proxy-7ml4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:32.289546  472908 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:32.689521  472908 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-762702" is "Ready"
	I1124 04:14:32.689553  472908 pod_ready.go:86] duration metric: took 399.979638ms for pod "kube-scheduler-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:32.689567  472908 pod_ready.go:40] duration metric: took 34.91019925s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
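	
	The per-pod polling above has a rough one-liner equivalent with kubectl wait, assuming the same label selectors listed in the log:
	
	  # block until every matching kube-system pod reports Ready (or the timeout hits)
	  kubectl -n kube-system wait pod --for=condition=Ready --timeout=4m \
	    -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)'
	  kubectl -n kube-system wait pod --for=condition=Ready --timeout=4m \
	    -l 'k8s-app in (kube-dns, kube-proxy)'
	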
	I1124 04:14:32.752268  472908 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1124 04:14:32.755445  472908 out.go:203] 
	W1124 04:14:32.758392  472908 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 04:14:32.761317  472908 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 04:14:32.764242  472908 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-762702" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.125816174Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=449f44cd-c399-4440-9f80-aaf585b4d864 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.12736584Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b00e97b7-f749-4690-8464-9c5f99bb9fc0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.128494125Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl/dashboard-metrics-scraper" id=bd021e87-f10a-42c3-89fd-ed9140d9fa4e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.128737813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.142137139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.142784032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.159664077Z" level=info msg="Created container b89b8f495cfe78a41475c2ab6476b4f7445f50c81d516ab1dd5a3fd23f6c3420: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl/dashboard-metrics-scraper" id=bd021e87-f10a-42c3-89fd-ed9140d9fa4e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.160696345Z" level=info msg="Starting container: b89b8f495cfe78a41475c2ab6476b4f7445f50c81d516ab1dd5a3fd23f6c3420" id=d1df45a6-3a3b-4a9c-a5f4-3b44dbb34b3d name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.162367301Z" level=info msg="Started container" PID=1680 containerID=b89b8f495cfe78a41475c2ab6476b4f7445f50c81d516ab1dd5a3fd23f6c3420 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl/dashboard-metrics-scraper id=d1df45a6-3a3b-4a9c-a5f4-3b44dbb34b3d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d6d700617ea5c64410d1f3a1d752c15f6fb793be2a37757627990a7d0533bfd
	Nov 24 04:14:28 old-k8s-version-762702 conmon[1678]: conmon b89b8f495cfe78a41475 <ninfo>: container 1680 exited with status 1
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.494947121Z" level=info msg="Removing container: 7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd" id=26bce0fa-3c76-4133-82ef-c3c9804621ad name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.506137835Z" level=info msg="Error loading conmon cgroup of container 7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd: cgroup deleted" id=26bce0fa-3c76-4133-82ef-c3c9804621ad name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.509546479Z" level=info msg="Removed container 7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl/dashboard-metrics-scraper" id=26bce0fa-3c76-4133-82ef-c3c9804621ad name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.031207948Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.038976995Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.039024651Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.03904843Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.043280101Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.043313348Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.043336184Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.047818656Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.047853799Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.047879202Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.052231859Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.05226715Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	b89b8f495cfe7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   8d6d700617ea5       dashboard-metrics-scraper-5f989dc9cf-6zmfl       kubernetes-dashboard
	6d5b65108d13f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   f7905f1f7d24c       storage-provisioner                              kube-system
	0b878cc21f861       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago      Running             kubernetes-dashboard        0                   dada1b8525d12       kubernetes-dashboard-8694d4445c-tzxjs            kubernetes-dashboard
	e5ab4162a62ac       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   6f576ae5c172b       busybox                                          default
	7a9b8cf11ce99       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago      Running             coredns                     1                   89ca73cf383f0       coredns-5dd5756b68-c5hgr                         kube-system
	d08b3a0aabdbd       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago      Running             kube-proxy                  1                   b8beb494f1517       kube-proxy-7ml4n                                 kube-system
	0baddab83fc97       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   f7905f1f7d24c       storage-provisioner                              kube-system
	8cd173cea9a6e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   717150b3ececc       kindnet-lkhzw                                    kube-system
	3dda3e0132288       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           56 seconds ago      Running             kube-scheduler              1                   257f177ad5f4c       kube-scheduler-old-k8s-version-762702            kube-system
	b120202a1fd97       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           56 seconds ago      Running             etcd                        1                   f84c463b326d3       etcd-old-k8s-version-762702                      kube-system
	54e9d746b3ca2       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           56 seconds ago      Running             kube-apiserver              1                   adcb8d1402134       kube-apiserver-old-k8s-version-762702            kube-system
	bbf50eb55a950       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           56 seconds ago      Running             kube-controller-manager     1                   078830a18384b       kube-controller-manager-old-k8s-version-762702   kube-system
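	
	The table above is CRI data; the same view is available directly on the node via crictl (configured earlier through /etc/crictl.yaml):
	
	  sudo crictl ps -a            # running and exited containers, as tabulated above
	  sudo crictl ps -a -o json    # machine-readable form for scripting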
	
	
	==> coredns [7a9b8cf11ce99604979c137a579bdbc8e2fadf7960914a459db24120e33d0076] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35998 - 17945 "HINFO IN 1242135383228280311.8938716767801997800. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033415739s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-762702
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-762702
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=old-k8s-version-762702
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_12_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:12:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-762702
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:14:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:14:26 +0000   Mon, 24 Nov 2025 04:12:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:14:26 +0000   Mon, 24 Nov 2025 04:12:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:14:26 +0000   Mon, 24 Nov 2025 04:12:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:14:26 +0000   Mon, 24 Nov 2025 04:13:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-762702
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                1df4042d-4e31-477e-85db-12513191744f
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-c5hgr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-old-k8s-version-762702                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-lkhzw                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-762702             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-762702    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-7ml4n                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-762702             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-6zmfl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-tzxjs             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-762702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node old-k8s-version-762702 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-762702 event: Registered Node old-k8s-version-762702 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-762702 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node old-k8s-version-762702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                  node-controller  Node old-k8s-version-762702 event: Registered Node old-k8s-version-762702 in Controller
	
	
	==> dmesg <==
	[Nov24 03:46] overlayfs: idmapped layers are currently not supported
	[Nov24 03:51] overlayfs: idmapped layers are currently not supported
	[ +32.185990] overlayfs: idmapped layers are currently not supported
	[Nov24 03:52] overlayfs: idmapped layers are currently not supported
	[Nov24 03:54] overlayfs: idmapped layers are currently not supported
	[Nov24 03:55] overlayfs: idmapped layers are currently not supported
	[Nov24 03:56] overlayfs: idmapped layers are currently not supported
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b120202a1fd97058a5aedf9f2bb21f0de530aaeecb2a7185c93067ac1ee7214d] <==
	{"level":"info","ts":"2025-11-24T04:13:51.111175Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T04:13:51.111215Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T04:13:51.111461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-24T04:13:51.111583Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-24T04:13:51.111808Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T04:13:51.111886Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T04:13:51.208386Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T04:13:51.208624Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T04:13:51.208655Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T04:13:51.208752Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T04:13:51.20876Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T04:13:52.254511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-24T04:13:52.254625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-24T04:13:52.254684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-24T04:13:52.254724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-24T04:13:52.25477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-24T04:13:52.254812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-24T04:13:52.254851Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-24T04:13:52.25714Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-762702 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T04:13:52.257336Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T04:13:52.257853Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T04:13:52.257949Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T04:13:52.258012Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T04:13:52.259507Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-24T04:13:52.26093Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 04:14:48 up  2:56,  0 user,  load average: 1.40, 2.76, 2.55
	Linux old-k8s-version-762702 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8cd173cea9a6e0c8848501a56105eae2eb7845d3ad6d5d080437ff7aea8df499] <==
	I1124 04:13:56.839412       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:13:56.840212       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 04:13:56.840345       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:13:56.840358       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:13:56.840369       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:13:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:13:57.027056       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:13:57.027204       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:13:57.027241       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:13:57.027860       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 04:14:27.027666       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 04:14:27.027683       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 04:14:27.027801       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 04:14:27.029021       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1124 04:14:28.527409       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:14:28.527443       1 metrics.go:72] Registering metrics
	I1124 04:14:28.527511       1 controller.go:711] "Syncing nftables rules"
	I1124 04:14:37.030855       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:14:37.030921       1 main.go:301] handling current node
	I1124 04:14:47.033426       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:14:47.033481       1 main.go:301] handling current node
	
	
	==> kube-apiserver [54e9d746b3ca2739d8be883f3078b9d3c9c03574f0b6d7975d0cec75f406d75d] <==
	I1124 04:13:55.664723       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 04:13:55.664947       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 04:13:55.677578       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:13:55.724955       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 04:13:55.743812       1 shared_informer.go:318] Caches are synced for configmaps
	I1124 04:13:55.746146       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 04:13:55.746190       1 aggregator.go:166] initial CRD sync complete...
	I1124 04:13:55.746198       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 04:13:55.746205       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 04:13:55.746221       1 cache.go:39] Caches are synced for autoregister controller
	I1124 04:13:55.762901       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 04:13:55.765918       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 04:13:55.774916       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1124 04:13:55.834993       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 04:13:56.313448       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:13:57.552281       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 04:13:57.601826       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 04:13:57.630638       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:13:57.641714       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:13:57.653722       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 04:13:57.709973       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.154.195"}
	I1124 04:13:57.729512       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.174.142"}
	I1124 04:14:07.675906       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 04:14:07.712456       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 04:14:07.737598       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [bbf50eb55a9501a33ac2de73d034111945ecb64e8907c5f3016c733432c67d30] <==
	I1124 04:14:07.761083       1 shared_informer.go:318] Caches are synced for persistent volume
	I1124 04:14:07.794189       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-tzxjs"
	I1124 04:14:07.794335       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-6zmfl"
	I1124 04:14:07.807725       1 shared_informer.go:318] Caches are synced for disruption
	I1124 04:14:07.823025       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 04:14:07.839135       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="84.261167ms"
	I1124 04:14:07.839905       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="93.074545ms"
	I1124 04:14:07.872849       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.898586ms"
	I1124 04:14:07.872920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="33.47µs"
	I1124 04:14:07.872964       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="33.788755ms"
	I1124 04:14:07.873005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.089µs"
	I1124 04:14:07.882837       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.572µs"
	I1124 04:14:07.887332       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 04:14:08.243139       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 04:14:08.243173       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 04:14:08.244544       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 04:14:13.491336       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="24.790463ms"
	I1124 04:14:13.491655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.84µs"
	I1124 04:14:17.478732       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.019µs"
	I1124 04:14:18.489401       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.476µs"
	I1124 04:14:19.481369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.22µs"
	I1124 04:14:29.512108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="100.588µs"
	I1124 04:14:31.114622       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.338605ms"
	I1124 04:14:31.115179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.498µs"
	I1124 04:14:38.139824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.284µs"
	
	
	==> kube-proxy [d08b3a0aabdbd70340b4745beb5f9d34a57c7e1f07a3837a7b5ed36377e70cff] <==
	I1124 04:13:57.181097       1 server_others.go:69] "Using iptables proxy"
	I1124 04:13:57.196742       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1124 04:13:57.323568       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:13:57.328248       1 server_others.go:152] "Using iptables Proxier"
	I1124 04:13:57.328348       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 04:13:57.328393       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 04:13:57.328457       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 04:13:57.328946       1 server.go:846] "Version info" version="v1.28.0"
	I1124 04:13:57.328998       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:13:57.329778       1 config.go:188] "Starting service config controller"
	I1124 04:13:57.329883       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 04:13:57.329936       1 config.go:97] "Starting endpoint slice config controller"
	I1124 04:13:57.329968       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 04:13:57.330448       1 config.go:315] "Starting node config controller"
	I1124 04:13:57.332259       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 04:13:57.430187       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1124 04:13:57.430242       1 shared_informer.go:318] Caches are synced for service config
	I1124 04:13:57.433840       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3dda3e01322889ba6ae662cf36d250b792923223d15823ac24db5c9c42c3272c] <==
	I1124 04:13:53.404150       1 serving.go:348] Generated self-signed cert in-memory
	W1124 04:13:55.622899       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 04:13:55.623030       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 04:13:55.623071       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 04:13:55.623104       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 04:13:55.729690       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1124 04:13:55.729798       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:13:55.731897       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1124 04:13:55.732036       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1124 04:13:55.732172       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:13:55.744982       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 04:13:55.846339       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 04:14:07 old-k8s-version-762702 kubelet[794]: I1124 04:14:07.818481     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xrpr\" (UniqueName: \"kubernetes.io/projected/6cadf852-5cc2-4f08-ad93-6d8f2962ce1e-kube-api-access-6xrpr\") pod \"kubernetes-dashboard-8694d4445c-tzxjs\" (UID: \"6cadf852-5cc2-4f08-ad93-6d8f2962ce1e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-tzxjs"
	Nov 24 04:14:07 old-k8s-version-762702 kubelet[794]: I1124 04:14:07.818691     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6cadf852-5cc2-4f08-ad93-6d8f2962ce1e-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-tzxjs\" (UID: \"6cadf852-5cc2-4f08-ad93-6d8f2962ce1e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-tzxjs"
	Nov 24 04:14:07 old-k8s-version-762702 kubelet[794]: I1124 04:14:07.822045     794 topology_manager.go:215] "Topology Admit Handler" podUID="614839c1-1d2d-4342-93dc-0cae816d580f" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-6zmfl"
	Nov 24 04:14:07 old-k8s-version-762702 kubelet[794]: I1124 04:14:07.919514     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/614839c1-1d2d-4342-93dc-0cae816d580f-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-6zmfl\" (UID: \"614839c1-1d2d-4342-93dc-0cae816d580f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl"
	Nov 24 04:14:07 old-k8s-version-762702 kubelet[794]: I1124 04:14:07.919573     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmd86\" (UniqueName: \"kubernetes.io/projected/614839c1-1d2d-4342-93dc-0cae816d580f-kube-api-access-bmd86\") pod \"dashboard-metrics-scraper-5f989dc9cf-6zmfl\" (UID: \"614839c1-1d2d-4342-93dc-0cae816d580f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl"
	Nov 24 04:14:08 old-k8s-version-762702 kubelet[794]: W1124 04:14:08.150682     794 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/crio-dada1b8525d12ffd79de05bbcf039d2367a00aa34e983585529f0b9aa9ae09bc WatchSource:0}: Error finding container dada1b8525d12ffd79de05bbcf039d2367a00aa34e983585529f0b9aa9ae09bc: Status 404 returned error can't find the container with id dada1b8525d12ffd79de05bbcf039d2367a00aa34e983585529f0b9aa9ae09bc
	Nov 24 04:14:08 old-k8s-version-762702 kubelet[794]: W1124 04:14:08.170867     794 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/crio-8d6d700617ea5c64410d1f3a1d752c15f6fb793be2a37757627990a7d0533bfd WatchSource:0}: Error finding container 8d6d700617ea5c64410d1f3a1d752c15f6fb793be2a37757627990a7d0533bfd: Status 404 returned error can't find the container with id 8d6d700617ea5c64410d1f3a1d752c15f6fb793be2a37757627990a7d0533bfd
	Nov 24 04:14:17 old-k8s-version-762702 kubelet[794]: I1124 04:14:17.458755     794 scope.go:117] "RemoveContainer" containerID="9adb6b17185b6682e8c2c8dd1a99c7a053f8b55e59d713d66db3fa31374bb565"
	Nov 24 04:14:17 old-k8s-version-762702 kubelet[794]: I1124 04:14:17.476696     794 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-tzxjs" podStartSLOduration=5.762431201 podCreationTimestamp="2025-11-24 04:14:07 +0000 UTC" firstStartedPulling="2025-11-24 04:14:08.158762705 +0000 UTC m=+18.058497948" lastFinishedPulling="2025-11-24 04:14:12.872966972 +0000 UTC m=+22.772702215" observedRunningTime="2025-11-24 04:14:13.467003062 +0000 UTC m=+23.366738337" watchObservedRunningTime="2025-11-24 04:14:17.476635468 +0000 UTC m=+27.376370719"
	Nov 24 04:14:18 old-k8s-version-762702 kubelet[794]: I1124 04:14:18.462737     794 scope.go:117] "RemoveContainer" containerID="9adb6b17185b6682e8c2c8dd1a99c7a053f8b55e59d713d66db3fa31374bb565"
	Nov 24 04:14:18 old-k8s-version-762702 kubelet[794]: I1124 04:14:18.463124     794 scope.go:117] "RemoveContainer" containerID="7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd"
	Nov 24 04:14:18 old-k8s-version-762702 kubelet[794]: E1124 04:14:18.464131     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6zmfl_kubernetes-dashboard(614839c1-1d2d-4342-93dc-0cae816d580f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl" podUID="614839c1-1d2d-4342-93dc-0cae816d580f"
	Nov 24 04:14:19 old-k8s-version-762702 kubelet[794]: I1124 04:14:19.467050     794 scope.go:117] "RemoveContainer" containerID="7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd"
	Nov 24 04:14:19 old-k8s-version-762702 kubelet[794]: E1124 04:14:19.467329     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6zmfl_kubernetes-dashboard(614839c1-1d2d-4342-93dc-0cae816d580f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl" podUID="614839c1-1d2d-4342-93dc-0cae816d580f"
	Nov 24 04:14:27 old-k8s-version-762702 kubelet[794]: I1124 04:14:27.485415     794 scope.go:117] "RemoveContainer" containerID="0baddab83fc97881a782442c419e631b24a2e0920b5bbc40571b6ca47409b609"
	Nov 24 04:14:28 old-k8s-version-762702 kubelet[794]: I1124 04:14:28.125139     794 scope.go:117] "RemoveContainer" containerID="7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd"
	Nov 24 04:14:28 old-k8s-version-762702 kubelet[794]: I1124 04:14:28.493269     794 scope.go:117] "RemoveContainer" containerID="7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd"
	Nov 24 04:14:29 old-k8s-version-762702 kubelet[794]: I1124 04:14:29.497384     794 scope.go:117] "RemoveContainer" containerID="b89b8f495cfe78a41475c2ab6476b4f7445f50c81d516ab1dd5a3fd23f6c3420"
	Nov 24 04:14:29 old-k8s-version-762702 kubelet[794]: E1124 04:14:29.497679     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6zmfl_kubernetes-dashboard(614839c1-1d2d-4342-93dc-0cae816d580f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl" podUID="614839c1-1d2d-4342-93dc-0cae816d580f"
	Nov 24 04:14:38 old-k8s-version-762702 kubelet[794]: I1124 04:14:38.125194     794 scope.go:117] "RemoveContainer" containerID="b89b8f495cfe78a41475c2ab6476b4f7445f50c81d516ab1dd5a3fd23f6c3420"
	Nov 24 04:14:38 old-k8s-version-762702 kubelet[794]: E1124 04:14:38.125503     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6zmfl_kubernetes-dashboard(614839c1-1d2d-4342-93dc-0cae816d580f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl" podUID="614839c1-1d2d-4342-93dc-0cae816d580f"
	Nov 24 04:14:45 old-k8s-version-762702 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 04:14:45 old-k8s-version-762702 kubelet[794]: I1124 04:14:45.209658     794 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 04:14:45 old-k8s-version-762702 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 04:14:45 old-k8s-version-762702 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0b878cc21f861b10f2b465a433c6552197db070682805f4c2674b0aa81bf3844] <==
	2025/11/24 04:14:12 Using namespace: kubernetes-dashboard
	2025/11/24 04:14:12 Using in-cluster config to connect to apiserver
	2025/11/24 04:14:12 Using secret token for csrf signing
	2025/11/24 04:14:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 04:14:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 04:14:12 Successful initial request to the apiserver, version: v1.28.0
	2025/11/24 04:14:12 Generating JWE encryption key
	2025/11/24 04:14:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 04:14:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 04:14:13 Initializing JWE encryption key from synchronized object
	2025/11/24 04:14:13 Creating in-cluster Sidecar client
	2025/11/24 04:14:13 Serving insecurely on HTTP port: 9090
	2025/11/24 04:14:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 04:14:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 04:14:12 Starting overwatch
	
	
	==> storage-provisioner [0baddab83fc97881a782442c419e631b24a2e0920b5bbc40571b6ca47409b609] <==
	I1124 04:13:56.905568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 04:14:26.927961       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6d5b65108d13f59e0a0172242b6877e1aa183242ced52c6bb7d03102dd8bc068] <==
	I1124 04:14:27.534089       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 04:14:27.550122       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 04:14:27.550183       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 04:14:44.949000       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 04:14:44.949276       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-762702_530938c6-ad9a-4601-8127-df869a61a610!
	I1124 04:14:44.950573       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfe1276a-d502-46a4-811c-2d6200e130b0", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-762702_530938c6-ad9a-4601-8127-df869a61a610 became leader
	I1124 04:14:45.054387       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-762702_530938c6-ad9a-4601-8127-df869a61a610!
	

-- /stdout --
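
The kubelet entries in the log above show dashboard-metrics-scraper's restart delay growing from "back-off 10s" to "back-off 20s". That is kubelet's standard crash-loop back-off: a 10-second initial delay, doubled after each failed restart, capped at 5 minutes. A minimal Go sketch of that progression (illustrative only; the constants mirror kubelet's documented defaults, and none of this code comes from the minikube or kubelet sources):

	package main

	import (
		"fmt"
		"time"
	)

	// Mirrors kubelet's CrashLoopBackOff schedule: 10s initial delay,
	// doubled per failed restart, capped at 5 minutes.
	func main() {
		const maxDelay = 5 * time.Minute
		delay := 10 * time.Second
		for restart := 1; restart <= 7; restart++ {
			fmt.Printf("restart %d: back-off %s\n", restart, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}

After six doublings the delay pins at the 5-minute cap, which is why long-crashing pods in these reports settle into "back-off 5m0s" messages.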
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-762702 -n old-k8s-version-762702
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-762702 -n old-k8s-version-762702: exit status 2 (372.866258ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-762702 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
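
The `kubectl get po -A --field-selector=status.phase!=Running` step above is how the harness surfaces any pod that is not in the Running phase. For readers reproducing this check outside the suite, the same query in client-go looks roughly like the sketch below; the kubeconfig handling is simplified, and this code is not part of helpers_test.go:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config; the test harness instead passes --context.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Mirrors: kubectl get po -A --field-selector=status.phase!=Running
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace + "/" + p.Name)
		}
	}

An empty result from this query (as in the run above, which printed nothing) means every pod the API server knows about reports phase Running, even though the pause test itself failed.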
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-762702
helpers_test.go:243: (dbg) docker inspect old-k8s-version-762702:

-- stdout --
	[
	    {
	        "Id": "b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a",
	        "Created": "2025-11-24T04:12:22.608705618Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473035,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:13:43.165474035Z",
	            "FinishedAt": "2025-11-24T04:13:42.309924062Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/hosts",
	        "LogPath": "/var/lib/docker/containers/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a-json.log",
	        "Name": "/old-k8s-version-762702",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-762702:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-762702",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a",
	                "LowerDir": "/var/lib/docker/overlay2/653c33f0be4a366cb5cc86ca2501e9ef033df8c8abee4cc8bc2eca215ba11542-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/653c33f0be4a366cb5cc86ca2501e9ef033df8c8abee4cc8bc2eca215ba11542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/653c33f0be4a366cb5cc86ca2501e9ef033df8c8abee4cc8bc2eca215ba11542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/653c33f0be4a366cb5cc86ca2501e9ef033df8c8abee4cc8bc2eca215ba11542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-762702",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-762702/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-762702",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-762702",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-762702",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "099dd3f6b3c1284a58643ae9891d5cfc2f89029daa0e8b273f37a5c3d01e7f9c",
	            "SandboxKey": "/var/run/docker/netns/099dd3f6b3c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-762702": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:d8:b1:8b:0f:7c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2839db71c04bcd656cafcc00851680b9c7cc53726d05c9804df0e7524d958ffa",
	                    "EndpointID": "d6bb48f3a820359570e5a65cd4df6e36300a2acdddb5a87c453e99d869ab94bb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-762702",
	                        "b9dfaaddc60d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
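
The `NetworkSettings.Ports` map in the inspect output above is where minikube finds the host ports it publishes for the container (8443/tcp backs the apiserver, mapped here to 127.0.0.1:33429). A small Go sketch that decodes those bindings from `docker inspect` JSON; the struct below models only the fields shown above and is not Docker's client-library type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Minimal slice of the `docker inspect` document: just the port bindings.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "old-k8s-version-762702").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
			panic("unexpected docker inspect output")
		}
		// 8443/tcp is the apiserver port minikube publishes on the loopback.
		for _, b := range entries[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
		}
	}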
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-762702 -n old-k8s-version-762702
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-762702 -n old-k8s-version-762702: exit status 2 (363.21117ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
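
The `--format={{.Host}}` and `--format={{.APIServer}}` flags used in these status checks are Go text/template expressions evaluated against minikube's status struct, which is why each run prints a single word such as "Running". A toy rendering sketch; the Status type here is assumed for illustration from the templates used in this report, not copied from the minikube sources:

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical stand-in for minikube's status struct, with only the
	// fields referenced by the templates seen in this report.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		// Same shape as: minikube status --format={{.Host}}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// Values as captured in the stdout blocks above.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Running"})
	}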
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-762702 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-762702 logs -n 25: (1.298517008s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-778509 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo containerd config dump                                                                                                                                                                                                  │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo crio config                                                                                                                                                                                                             │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ delete  │ -p cilium-778509                                                                                                                                                                                                                              │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │ 24 Nov 25 04:10 UTC │
	│ start   │ -p force-systemd-env-400958 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-400958  │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │ 24 Nov 25 04:11 UTC │
	│ delete  │ -p kubernetes-upgrade-207884                                                                                                                                                                                                                  │ kubernetes-upgrade-207884 │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ start   │ -p cert-expiration-918798 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-918798    │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ delete  │ -p force-systemd-env-400958                                                                                                                                                                                                                   │ force-systemd-env-400958  │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ start   │ -p cert-options-967682 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:12 UTC │
	│ ssh     │ cert-options-967682 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ ssh     │ -p cert-options-967682 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ delete  │ -p cert-options-967682                                                                                                                                                                                                                        │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-762702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │                     │
	│ stop    │ -p old-k8s-version-762702 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-762702 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:13 UTC │
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:14 UTC │
	│ image   │ old-k8s-version-762702 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ pause   │ -p old-k8s-version-762702 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:13:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:13:42.860676  472908 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:13:42.860798  472908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:13:42.860809  472908 out.go:374] Setting ErrFile to fd 2...
	I1124 04:13:42.860815  472908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:13:42.861095  472908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:13:42.861494  472908 out.go:368] Setting JSON to false
	I1124 04:13:42.862408  472908 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10552,"bootTime":1763947071,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:13:42.862562  472908 start.go:143] virtualization:  
	I1124 04:13:42.867689  472908 out.go:179] * [old-k8s-version-762702] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:13:42.870750  472908 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:13:42.870900  472908 notify.go:221] Checking for updates...
	I1124 04:13:42.877144  472908 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:13:42.880228  472908 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:13:42.883588  472908 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:13:42.888507  472908 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:13:42.891557  472908 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:13:42.895126  472908 config.go:182] Loaded profile config "old-k8s-version-762702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 04:13:42.898794  472908 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1124 04:13:42.901560  472908 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:13:42.938973  472908 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:13:42.939113  472908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:13:43.014344  472908 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:13:43.002806559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:13:43.014490  472908 docker.go:319] overlay module found
	I1124 04:13:43.017722  472908 out.go:179] * Using the docker driver based on existing profile
	I1124 04:13:43.020610  472908 start.go:309] selected driver: docker
	I1124 04:13:43.020641  472908 start.go:927] validating driver "docker" against &{Name:old-k8s-version-762702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:13:43.020754  472908 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:13:43.021487  472908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:13:43.078400  472908 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:13:43.068062515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:13:43.078968  472908 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:13:43.079000  472908 cni.go:84] Creating CNI manager for ""
	I1124 04:13:43.079063  472908 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:13:43.079103  472908 start.go:353] cluster config:
	{Name:old-k8s-version-762702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:13:43.082330  472908 out.go:179] * Starting "old-k8s-version-762702" primary control-plane node in "old-k8s-version-762702" cluster
	I1124 04:13:43.085167  472908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:13:43.088115  472908 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:13:43.090952  472908 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 04:13:43.091002  472908 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1124 04:13:43.091016  472908 cache.go:65] Caching tarball of preloaded images
	I1124 04:13:43.091027  472908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:13:43.091110  472908 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:13:43.091121  472908 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1124 04:13:43.091237  472908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/config.json ...
	I1124 04:13:43.111965  472908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:13:43.111986  472908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:13:43.112007  472908 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:13:43.112037  472908 start.go:360] acquireMachinesLock for old-k8s-version-762702: {Name:mk39e7bd6d63be24b0c5297d3d6b80f2dd18eb45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:13:43.112101  472908 start.go:364] duration metric: took 39.098µs to acquireMachinesLock for "old-k8s-version-762702"
	I1124 04:13:43.112124  472908 start.go:96] Skipping create...Using existing machine configuration
	I1124 04:13:43.112131  472908 fix.go:54] fixHost starting: 
	I1124 04:13:43.112398  472908 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:43.129439  472908 fix.go:112] recreateIfNeeded on old-k8s-version-762702: state=Stopped err=<nil>
	W1124 04:13:43.129468  472908 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 04:13:43.132773  472908 out.go:252] * Restarting existing docker container for "old-k8s-version-762702" ...
	I1124 04:13:43.132882  472908 cli_runner.go:164] Run: docker start old-k8s-version-762702
	I1124 04:13:43.401012  472908 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:43.423905  472908 kic.go:430] container "old-k8s-version-762702" state is running.
	I1124 04:13:43.424317  472908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-762702
	I1124 04:13:43.448402  472908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/config.json ...
	I1124 04:13:43.448635  472908 machine.go:94] provisionDockerMachine start ...
	I1124 04:13:43.448695  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:43.473422  472908 main.go:143] libmachine: Using SSH client type: native
	I1124 04:13:43.473766  472908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1124 04:13:43.473782  472908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:13:43.474483  472908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 04:13:46.622216  472908 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-762702
	
	I1124 04:13:46.622239  472908 ubuntu.go:182] provisioning hostname "old-k8s-version-762702"
	I1124 04:13:46.622313  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:46.640428  472908 main.go:143] libmachine: Using SSH client type: native
	I1124 04:13:46.640746  472908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1124 04:13:46.640758  472908 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-762702 && echo "old-k8s-version-762702" | sudo tee /etc/hostname
	I1124 04:13:46.802208  472908 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-762702
	
	I1124 04:13:46.802284  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:46.822047  472908 main.go:143] libmachine: Using SSH client type: native
	I1124 04:13:46.822379  472908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1124 04:13:46.822403  472908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-762702' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-762702/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-762702' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 04:13:46.970877  472908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
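	(The snippet above is minikube's idempotent hostname fix-up: if no /etc/hosts line already ends in the new hostname, it either rewrites an existing 127.0.1.1 entry or appends one. A minimal Go sketch of the same edit follows; ensureHostsEntry is an illustrative name, not minikube's API, and it assumes direct file access rather than the SSH session used here.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the shell above: leave /etc/hosts alone if the
// hostname is already mapped, otherwise rewrite an existing 127.0.1.1 line
// or append a new one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// grep -xq '.*\s<hostname>' equivalent: any line ending in the hostname.
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte(fmt.Sprintf("127.0.1.1 %s\n", hostname))...)
	}
	return os.WriteFile(path, data, 0644)
}

	The same pattern recurs later for the host.minikube.internal and control-plane.minikube.internal entries.)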
	I1124 04:13:46.970917  472908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:13:46.970969  472908 ubuntu.go:190] setting up certificates
	I1124 04:13:46.970979  472908 provision.go:84] configureAuth start
	I1124 04:13:46.971052  472908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-762702
	I1124 04:13:46.993312  472908 provision.go:143] copyHostCerts
	I1124 04:13:46.993401  472908 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:13:46.993420  472908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:13:46.993503  472908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:13:46.993604  472908 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:13:46.993617  472908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:13:46.993645  472908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:13:46.993751  472908 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:13:46.993763  472908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:13:46.993791  472908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:13:46.993845  472908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-762702 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-762702]
	I1124 04:13:47.446820  472908 provision.go:177] copyRemoteCerts
	I1124 04:13:47.446888  472908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:13:47.446935  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:47.463932  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:47.566124  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:13:47.583683  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 04:13:47.600988  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 04:13:47.619125  472908 provision.go:87] duration metric: took 648.119496ms to configureAuth
	I1124 04:13:47.619152  472908 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:13:47.619352  472908 config.go:182] Loaded profile config "old-k8s-version-762702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 04:13:47.619457  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:47.637149  472908 main.go:143] libmachine: Using SSH client type: native
	I1124 04:13:47.637462  472908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1124 04:13:47.637474  472908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:13:48.008204  472908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 04:13:48.008290  472908 machine.go:97] duration metric: took 4.559644457s to provisionDockerMachine
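	(The provisioning step just completed writes a systemd environment drop-in, /etc/sysconfig/crio.minikube, carrying the --insecure-registry flag, then restarts crio so it takes effect. A sketch of that step in Go; writeCrioSysconfig is an illustrative name, and minikube actually runs the printf/tee pipeline above over SSH.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// writeCrioSysconfig drops the CRIO_MINIKUBE_OPTIONS environment file and
// restarts the crio service, as in the provisioning log above.
func writeCrioSysconfig(cidr string) error {
	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", cidr)
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		return err
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		return err
	}
	return exec.Command("systemctl", "restart", "crio").Run()
}

// e.g. writeCrioSysconfig("10.96.0.0/12"), matching the service CIDR above.
	)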
	I1124 04:13:48.008320  472908 start.go:293] postStartSetup for "old-k8s-version-762702" (driver="docker")
	I1124 04:13:48.008359  472908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:13:48.008476  472908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:13:48.008543  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:48.027417  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:48.134817  472908 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:13:48.138654  472908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:13:48.138692  472908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:13:48.138734  472908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:13:48.138903  472908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:13:48.139022  472908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:13:48.139135  472908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:13:48.148946  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:13:48.167262  472908 start.go:296] duration metric: took 158.900199ms for postStartSetup
	I1124 04:13:48.167364  472908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:13:48.167418  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:48.185664  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:48.287803  472908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:13:48.292841  472908 fix.go:56] duration metric: took 5.180702363s for fixHost
	I1124 04:13:48.292870  472908 start.go:83] releasing machines lock for "old-k8s-version-762702", held for 5.180756139s
	I1124 04:13:48.292950  472908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-762702
	I1124 04:13:48.310858  472908 ssh_runner.go:195] Run: cat /version.json
	I1124 04:13:48.310876  472908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:13:48.310909  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:48.310930  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:48.328636  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:48.330518  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:48.434254  472908 ssh_runner.go:195] Run: systemctl --version
	I1124 04:13:48.527259  472908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:13:48.569399  472908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:13:48.574795  472908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:13:48.574871  472908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:13:48.582872  472908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
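	(Here minikube quarantines any bridge or podman CNI configs so only its own CNI, kindnet as chosen above, stays active; in this run there were none to disable. The find/-exec mv step amounts to roughly the following; disableBridgeCNI is an illustrative name, and the real code shells out exactly as logged.

package main

import (
	"os"
	"path/filepath"
)

// disableBridgeCNI renames bridge/podman CNI configs in dir aside with a
// .mk_disabled suffix so the runtime no longer loads them.
func disableBridgeCNI(dir string) ([]string, error) {
	var moved []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already quarantined
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			moved = append(moved, m)
		}
	}
	return moved, nil
}
	)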
	I1124 04:13:48.582895  472908 start.go:496] detecting cgroup driver to use...
	I1124 04:13:48.582928  472908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:13:48.582976  472908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:13:48.598129  472908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:13:48.611571  472908 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:13:48.611680  472908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:13:48.627347  472908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:13:48.641021  472908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:13:48.764080  472908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:13:48.888025  472908 docker.go:234] disabling docker service ...
	I1124 04:13:48.888091  472908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:13:48.903536  472908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:13:48.916985  472908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:13:49.050246  472908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:13:49.195890  472908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:13:49.209751  472908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:13:49.225218  472908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1124 04:13:49.225308  472908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.235181  472908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:13:49.235274  472908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.244484  472908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.254835  472908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.263648  472908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:13:49.271897  472908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.281274  472908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.290391  472908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:13:49.299678  472908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:13:49.307561  472908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:13:49.315115  472908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:13:49.440135  472908 ssh_runner.go:195] Run: sudo systemctl restart crio
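	(The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place, setting the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl, before the daemon-reload and crio restart. Each edit follows the same replace-the-whole-line pattern, which a Go sketch makes explicit; setCrioOption is an illustrative name, and unlike this sketch the sed minikube uses silently no-ops when the key is absent.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption replaces any existing `<key> = ...` line in a CRI-O config
// drop-in with a quoted value, like sed -i 's|^.*key = .*$|key = "v"|'.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if !re.Match(data) {
		return fmt.Errorf("%s: no %q line to replace", path, key)
	}
	repl := fmt.Sprintf("%s = %q", key, value)
	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0644)
}

// e.g. setCrioOption("/etc/crio/crio.conf.d/02-crio.conf",
//	"pause_image", "registry.k8s.io/pause:3.9")
	)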
	I1124 04:13:49.605005  472908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:13:49.605087  472908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:13:49.609005  472908 start.go:564] Will wait 60s for crictl version
	I1124 04:13:49.609093  472908 ssh_runner.go:195] Run: which crictl
	I1124 04:13:49.612829  472908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:13:49.637853  472908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:13:49.637945  472908 ssh_runner.go:195] Run: crio --version
	I1124 04:13:49.673388  472908 ssh_runner.go:195] Run: crio --version
	I1124 04:13:49.714589  472908 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1124 04:13:49.717524  472908 cli_runner.go:164] Run: docker network inspect old-k8s-version-762702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
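	(Go templates passed to docker inspect do most of the host-side discovery in this log: the network summary here, and earlier the repeated (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort lookup that finds the forwarded SSH port, 33426 in this run. The same port lookup can be done by decoding the inspect JSON; a sketch, with hostSSHPort as an illustrative name rather than minikube's helper.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// hostSSHPort returns the host port bound to the container's 22/tcp,
// equivalent to the --format template used throughout the log above.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var infos []struct {
		NetworkSettings struct {
			Ports map[string][]struct{ HostPort string }
		}
	}
	if err := json.Unmarshal(out, &infos); err != nil {
		return "", err
	}
	if len(infos) == 0 || len(infos[0].NetworkSettings.Ports["22/tcp"]) == 0 {
		return "", fmt.Errorf("no 22/tcp binding for %s", container)
	}
	return infos[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
}
	)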
	I1124 04:13:49.733676  472908 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 04:13:49.737757  472908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:13:49.747222  472908 kubeadm.go:884] updating cluster {Name:old-k8s-version-762702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:13:49.747354  472908 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 04:13:49.747415  472908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:13:49.782232  472908 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:13:49.782259  472908 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:13:49.782324  472908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:13:49.808943  472908 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:13:49.808967  472908 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:13:49.808975  472908 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1124 04:13:49.809071  472908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-762702 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 04:13:49.809150  472908 ssh_runner.go:195] Run: crio config
	I1124 04:13:49.884330  472908 cni.go:84] Creating CNI manager for ""
	I1124 04:13:49.884353  472908 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:13:49.884400  472908 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:13:49.884430  472908 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-762702 NodeName:old-k8s-version-762702 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:13:49.884598  472908 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-762702"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
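	(The generated file above is one multi-document YAML: InitConfiguration for the node, socket, and IP; ClusterConfiguration for API server SANs, etcd, and subnets; KubeletConfiguration for the cgroupfs driver and disabled eviction; and KubeProxyConfiguration for the cluster CIDR. A stdlib-only sketch that splits such a config on its "---" separators and lists each document's kind; listKinds is illustrative, and real handling would use a YAML parser.

package main

import "strings"

// listKinds reports the kind: of each document in a multi-doc kubeadm
// config like the one above, e.g. [InitConfiguration ClusterConfiguration
// KubeletConfiguration KubeProxyConfiguration].
func listKinds(config string) []string {
	var kinds []string
	for _, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if v := strings.TrimSpace(line); strings.HasPrefix(v, "kind:") {
				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(v, "kind:")))
				break
			}
		}
	}
	return kinds
}
	)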
	
	I1124 04:13:49.884671  472908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 04:13:49.892291  472908 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:13:49.892385  472908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:13:49.899738  472908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1124 04:13:49.912667  472908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:13:49.925261  472908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1124 04:13:49.938702  472908 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:13:49.942186  472908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:13:49.952586  472908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:13:50.075118  472908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:13:50.096741  472908 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702 for IP: 192.168.85.2
	I1124 04:13:50.096765  472908 certs.go:195] generating shared ca certs ...
	I1124 04:13:50.096808  472908 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:13:50.097011  472908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:13:50.097092  472908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:13:50.097110  472908 certs.go:257] generating profile certs ...
	I1124 04:13:50.097249  472908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.key
	I1124 04:13:50.097344  472908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.key.8fa10a20
	I1124 04:13:50.097424  472908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.key
	I1124 04:13:50.097557  472908 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:13:50.097611  472908 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:13:50.097628  472908 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:13:50.097675  472908 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:13:50.097720  472908 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:13:50.097753  472908 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:13:50.097862  472908 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:13:50.098594  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:13:50.122978  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:13:50.144621  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:13:50.168399  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:13:50.198261  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 04:13:50.217699  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 04:13:50.243700  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:13:50.270104  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 04:13:50.297220  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:13:50.322851  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:13:50.345027  472908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:13:50.363912  472908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:13:50.377647  472908 ssh_runner.go:195] Run: openssl version
	I1124 04:13:50.384187  472908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:13:50.392569  472908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:13:50.396302  472908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:13:50.396398  472908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:13:50.439770  472908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 04:13:50.447946  472908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:13:50.456187  472908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:13:50.459899  472908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:13:50.459967  472908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:13:50.500832  472908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:13:50.508897  472908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:13:50.517318  472908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:13:50.521166  472908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:13:50.521248  472908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:13:50.566723  472908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
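
The test -L / ln -fs runs above reproduce OpenSSL's c_rehash layout: each CA PEM gets a symlink named after its subject hash (3ec20f2e.0, b5213941.0, 51391683.0 here), which is how verifiers locate it in /etc/ssl/certs. A sketch of creating one such link, shelling out to openssl exactly as the log does (paths illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert reproduces the convention above: compute the subject hash with
// "openssl x509 -hash -noout", then point /etc/ssl/certs/<hash>.0 at the PEM.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror ln -fs: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
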
	I1124 04:13:50.574934  472908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:13:50.578792  472908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 04:13:50.620168  472908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 04:13:50.661568  472908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 04:13:50.702922  472908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 04:13:50.754620  472908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 04:13:50.820890  472908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
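
Each "openssl x509 -checkend 86400" run above asks whether a control-plane certificate expires within the next 24 hours, which is what decides whether minikube must regenerate it before reuse. The same check expressed with Go's crypto/x509, as a self-contained sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at pemPath expires within d,
// the question "openssl x509 -checkend 86400" answers for d = 24h.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
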
	I1124 04:13:50.879323  472908 kubeadm.go:401] StartCluster: {Name:old-k8s-version-762702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-762702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:13:50.879413  472908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:13:50.879527  472908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:13:50.948487  472908 cri.go:89] found id: "3dda3e01322889ba6ae662cf36d250b792923223d15823ac24db5c9c42c3272c"
	I1124 04:13:50.948510  472908 cri.go:89] found id: "b120202a1fd97058a5aedf9f2bb21f0de530aaeecb2a7185c93067ac1ee7214d"
	I1124 04:13:50.948516  472908 cri.go:89] found id: "54e9d746b3ca2739d8be883f3078b9d3c9c03574f0b6d7975d0cec75f406d75d"
	I1124 04:13:50.948551  472908 cri.go:89] found id: "bbf50eb55a9501a33ac2de73d034111945ecb64e8907c5f3016c733432c67d30"
	I1124 04:13:50.948562  472908 cri.go:89] found id: ""
	I1124 04:13:50.948614  472908 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 04:13:50.968616  472908 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:13:50Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:13:50.968721  472908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:13:50.994502  472908 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 04:13:50.994521  472908 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 04:13:50.994601  472908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 04:13:51.016821  472908 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 04:13:51.017462  472908 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-762702" does not appear in /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:13:51.017773  472908 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-289526/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-762702" cluster setting kubeconfig missing "old-k8s-version-762702" context setting]
	I1124 04:13:51.018258  472908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
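
The repair step rewrites the kubeconfig under the file lock shown above, adding the missing cluster and context entries for the profile. A minimal sketch with client-go's clientcmd package (the CA path is illustrative, and the user credentials are assumed to already exist under the same name):

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// addCluster inserts a cluster plus a matching context into an existing
// kubeconfig, the repair the log reports for "old-k8s-version-762702".
func addCluster(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cluster := api.NewCluster()
	cluster.Server = server
	cluster.CertificateAuthority = "/home/jenkins/.minikube/ca.crt" // illustrative
	cfg.Clusters[name] = cluster

	ctx := api.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name // assumes a credential entry of the same name
	cfg.Contexts[name] = ctx

	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := addCluster("kubeconfig", "old-k8s-version-762702", "https://192.168.85.2:8443"); err != nil {
		log.Fatal(err)
	}
}
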
	I1124 04:13:51.019853  472908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 04:13:51.038779  472908 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 04:13:51.038817  472908 kubeadm.go:602] duration metric: took 44.289317ms to restartPrimaryControlPlane
	I1124 04:13:51.038849  472908 kubeadm.go:403] duration metric: took 159.536046ms to StartCluster
	I1124 04:13:51.038873  472908 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:13:51.038954  472908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:13:51.039946  472908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:13:51.040216  472908 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:13:51.040540  472908 config.go:182] Loaded profile config "old-k8s-version-762702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 04:13:51.040694  472908 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:13:51.040959  472908 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-762702"
	I1124 04:13:51.040987  472908 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-762702"
	W1124 04:13:51.041010  472908 addons.go:248] addon storage-provisioner should already be in state true
	I1124 04:13:51.041043  472908 host.go:66] Checking if "old-k8s-version-762702" exists ...
	I1124 04:13:51.041745  472908 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:51.042243  472908 addons.go:70] Setting dashboard=true in profile "old-k8s-version-762702"
	I1124 04:13:51.042267  472908 addons.go:239] Setting addon dashboard=true in "old-k8s-version-762702"
	W1124 04:13:51.042275  472908 addons.go:248] addon dashboard should already be in state true
	I1124 04:13:51.042302  472908 host.go:66] Checking if "old-k8s-version-762702" exists ...
	I1124 04:13:51.042732  472908 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:51.043702  472908 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-762702"
	I1124 04:13:51.043731  472908 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-762702"
	I1124 04:13:51.044065  472908 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:51.047077  472908 out.go:179] * Verifying Kubernetes components...
	I1124 04:13:51.050418  472908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:13:51.100872  472908 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:13:51.104025  472908 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:13:51.104052  472908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:13:51.104114  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:51.106560  472908 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 04:13:51.111852  472908 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-762702"
	W1124 04:13:51.111876  472908 addons.go:248] addon default-storageclass should already be in state true
	I1124 04:13:51.111902  472908 host.go:66] Checking if "old-k8s-version-762702" exists ...
	I1124 04:13:51.112318  472908 cli_runner.go:164] Run: docker container inspect old-k8s-version-762702 --format={{.State.Status}}
	I1124 04:13:51.119308  472908 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 04:13:51.126554  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 04:13:51.126592  472908 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 04:13:51.126816  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:51.138715  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:51.163820  472908 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:13:51.163848  472908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:13:51.163923  472908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762702
	I1124 04:13:51.179260  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:51.196539  472908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/old-k8s-version-762702/id_rsa Username:docker}
	I1124 04:13:51.387802  472908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:13:51.406864  472908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:13:51.416095  472908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:13:51.440418  472908 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-762702" to be "Ready" ...
	I1124 04:13:51.485201  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 04:13:51.485277  472908 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 04:13:51.537593  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 04:13:51.537667  472908 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 04:13:51.648993  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 04:13:51.649068  472908 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 04:13:51.696113  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 04:13:51.696176  472908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 04:13:51.740268  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 04:13:51.740339  472908 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 04:13:51.758975  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 04:13:51.759046  472908 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 04:13:51.775354  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 04:13:51.775431  472908 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 04:13:51.798594  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 04:13:51.798666  472908 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 04:13:51.820562  472908 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 04:13:51.820642  472908 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 04:13:51.840651  472908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
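
Each addon is staged as YAML files under /etc/kubernetes/addons and then applied in a single kubectl invocation with one -f flag per manifest, run with the node-local kubeconfig and the version-matched kubectl binary, as the command above shows. A sketch of assembling that command:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddons mirrors the invocation above: one kubectl apply carrying an
// -f flag per staged manifest, pointed at the node-local kubeconfig.
func applyAddons(manifests []string) *exec.Cmd {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.28.0/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	return cmd
}

func main() {
	cmd := applyAddons([]string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	})
	fmt.Println(cmd.String())
}
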
	I1124 04:13:55.630305  472908 node_ready.go:49] node "old-k8s-version-762702" is "Ready"
	I1124 04:13:55.630338  472908 node_ready.go:38] duration metric: took 4.189829283s for node "old-k8s-version-762702" to be "Ready" ...
	I1124 04:13:55.630352  472908 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:13:55.630414  472908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:13:57.300253  472908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.89330774s)
	I1124 04:13:57.300332  472908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.884169934s)
	I1124 04:13:57.736321  472908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.89558088s)
	I1124 04:13:57.736605  472908 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.106177392s)
	I1124 04:13:57.736655  472908 api_server.go:72] duration metric: took 6.696407544s to wait for apiserver process to appear ...
	I1124 04:13:57.736697  472908 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:13:57.736716  472908 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 04:13:57.739761  472908 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-762702 addons enable metrics-server
	
	I1124 04:13:57.743063  472908 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 04:13:57.745352  472908 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 04:13:57.746830  472908 api_server.go:141] control plane version: v1.28.0
	I1124 04:13:57.746856  472908 api_server.go:131] duration metric: took 10.15176ms to wait for apiserver health ...
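
The healthz wait above is a plain HTTPS GET against the apiserver that succeeds once it returns 200 with body "ok". A self-contained polling sketch (real minikube authenticates with the client certs staged earlier; TLS verification is skipped here only to keep the example short):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls /healthz until the apiserver answers 200 "ok" or the
// deadline passes, the loop behind the api_server.go lines above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz not ok within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.85.2:8443/healthz", time.Minute))
}
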
	I1124 04:13:57.746866  472908 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:13:57.747031  472908 addons.go:530] duration metric: took 6.7063316s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 04:13:57.750728  472908 system_pods.go:59] 8 kube-system pods found
	I1124 04:13:57.750766  472908 system_pods.go:61] "coredns-5dd5756b68-c5hgr" [7d0b287f-b2e8-461f-abf4-71700b66caf8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:13:57.750808  472908 system_pods.go:61] "etcd-old-k8s-version-762702" [62d0f56d-8e43-47b5-baf7-2af95f42cd81] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:13:57.750821  472908 system_pods.go:61] "kindnet-lkhzw" [db06bd2a-7e8a-49e3-a17f-62b681f600d1] Running
	I1124 04:13:57.750828  472908 system_pods.go:61] "kube-apiserver-old-k8s-version-762702" [efc26447-b9f1-4aa7-a2b8-e2ef56674415] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:13:57.750835  472908 system_pods.go:61] "kube-controller-manager-old-k8s-version-762702" [9817fe2f-c899-4ef9-8e2f-c0b22566b389] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:13:57.750843  472908 system_pods.go:61] "kube-proxy-7ml4n" [1ed410af-141e-4197-9a5c-6900dc8e35e6] Running
	I1124 04:13:57.750850  472908 system_pods.go:61] "kube-scheduler-old-k8s-version-762702" [e1a7d08c-4e60-4f84-a997-3baef7354877] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:13:57.750854  472908 system_pods.go:61] "storage-provisioner" [8af39921-2789-4cc5-974a-89f0667a6e47] Running
	I1124 04:13:57.750862  472908 system_pods.go:74] duration metric: took 3.990684ms to wait for pod list to return data ...
	I1124 04:13:57.750881  472908 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:13:57.753306  472908 default_sa.go:45] found service account: "default"
	I1124 04:13:57.753333  472908 default_sa.go:55] duration metric: took 2.445858ms for default service account to be created ...
	I1124 04:13:57.753343  472908 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 04:13:57.756974  472908 system_pods.go:86] 8 kube-system pods found
	I1124 04:13:57.757007  472908 system_pods.go:89] "coredns-5dd5756b68-c5hgr" [7d0b287f-b2e8-461f-abf4-71700b66caf8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:13:57.757017  472908 system_pods.go:89] "etcd-old-k8s-version-762702" [62d0f56d-8e43-47b5-baf7-2af95f42cd81] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:13:57.757023  472908 system_pods.go:89] "kindnet-lkhzw" [db06bd2a-7e8a-49e3-a17f-62b681f600d1] Running
	I1124 04:13:57.757031  472908 system_pods.go:89] "kube-apiserver-old-k8s-version-762702" [efc26447-b9f1-4aa7-a2b8-e2ef56674415] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:13:57.757038  472908 system_pods.go:89] "kube-controller-manager-old-k8s-version-762702" [9817fe2f-c899-4ef9-8e2f-c0b22566b389] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:13:57.757043  472908 system_pods.go:89] "kube-proxy-7ml4n" [1ed410af-141e-4197-9a5c-6900dc8e35e6] Running
	I1124 04:13:57.757053  472908 system_pods.go:89] "kube-scheduler-old-k8s-version-762702" [e1a7d08c-4e60-4f84-a997-3baef7354877] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:13:57.757057  472908 system_pods.go:89] "storage-provisioner" [8af39921-2789-4cc5-974a-89f0667a6e47] Running
	I1124 04:13:57.757071  472908 system_pods.go:126] duration metric: took 3.72211ms to wait for k8s-apps to be running ...
	I1124 04:13:57.757082  472908 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 04:13:57.757151  472908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:13:57.771318  472908 system_svc.go:56] duration metric: took 14.225801ms WaitForService to wait for kubelet
	I1124 04:13:57.771391  472908 kubeadm.go:587] duration metric: took 6.731141299s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:13:57.771425  472908 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:13:57.774417  472908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:13:57.774534  472908 node_conditions.go:123] node cpu capacity is 2
	I1124 04:13:57.774562  472908 node_conditions.go:105] duration metric: took 3.117786ms to run NodePressure ...
	I1124 04:13:57.774603  472908 start.go:242] waiting for startup goroutines ...
	I1124 04:13:57.774630  472908 start.go:247] waiting for cluster config update ...
	I1124 04:13:57.774656  472908 start.go:256] writing updated cluster config ...
	I1124 04:13:57.774998  472908 ssh_runner.go:195] Run: rm -f paused
	I1124 04:13:57.779299  472908 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:13:57.784445  472908 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-c5hgr" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 04:13:59.791186  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:02.290635  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:04.790138  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:06.790580  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:08.791287  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:11.289930  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:13.290809  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:15.792123  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:18.290640  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:20.791667  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:23.290285  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:25.291478  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:27.790390  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	W1124 04:14:29.790511  472908 pod_ready.go:104] pod "coredns-5dd5756b68-c5hgr" is not "Ready", error: <nil>
	I1124 04:14:31.290925  472908 pod_ready.go:94] pod "coredns-5dd5756b68-c5hgr" is "Ready"
	I1124 04:14:31.290954  472908 pod_ready.go:86] duration metric: took 33.506443249s for pod "coredns-5dd5756b68-c5hgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.294343  472908 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.299377  472908 pod_ready.go:94] pod "etcd-old-k8s-version-762702" is "Ready"
	I1124 04:14:31.299401  472908 pod_ready.go:86] duration metric: took 5.033101ms for pod "etcd-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.302665  472908 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.307382  472908 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-762702" is "Ready"
	I1124 04:14:31.307409  472908 pod_ready.go:86] duration metric: took 4.719234ms for pod "kube-apiserver-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.310352  472908 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.488811  472908 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-762702" is "Ready"
	I1124 04:14:31.488844  472908 pod_ready.go:86] duration metric: took 178.463686ms for pod "kube-controller-manager-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:31.689744  472908 pod_ready.go:83] waiting for pod "kube-proxy-7ml4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:32.088499  472908 pod_ready.go:94] pod "kube-proxy-7ml4n" is "Ready"
	I1124 04:14:32.088527  472908 pod_ready.go:86] duration metric: took 398.75422ms for pod "kube-proxy-7ml4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:32.289546  472908 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:32.689521  472908 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-762702" is "Ready"
	I1124 04:14:32.689553  472908 pod_ready.go:86] duration metric: took 399.979638ms for pod "kube-scheduler-old-k8s-version-762702" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:14:32.689567  472908 pod_ready.go:40] duration metric: took 34.91019925s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
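
The pod_ready loop above polls each labelled kube-system pod until its PodReady condition reports True (the coredns pod took 33.5s here). The underlying per-pod check, as a client-go sketch:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady mirrors the check behind pod_ready.go: a pod counts as "Ready"
// when its PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-5dd5756b68-c5hgr", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.Name, "ready:", isReady(pod))
}
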
	I1124 04:14:32.752268  472908 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1124 04:14:32.755445  472908 out.go:203] 
	W1124 04:14:32.758392  472908 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 04:14:32.761317  472908 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 04:14:32.764242  472908 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-762702" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.125816174Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=449f44cd-c399-4440-9f80-aaf585b4d864 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.12736584Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b00e97b7-f749-4690-8464-9c5f99bb9fc0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.128494125Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl/dashboard-metrics-scraper" id=bd021e87-f10a-42c3-89fd-ed9140d9fa4e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.128737813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.142137139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.142784032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.159664077Z" level=info msg="Created container b89b8f495cfe78a41475c2ab6476b4f7445f50c81d516ab1dd5a3fd23f6c3420: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl/dashboard-metrics-scraper" id=bd021e87-f10a-42c3-89fd-ed9140d9fa4e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.160696345Z" level=info msg="Starting container: b89b8f495cfe78a41475c2ab6476b4f7445f50c81d516ab1dd5a3fd23f6c3420" id=d1df45a6-3a3b-4a9c-a5f4-3b44dbb34b3d name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.162367301Z" level=info msg="Started container" PID=1680 containerID=b89b8f495cfe78a41475c2ab6476b4f7445f50c81d516ab1dd5a3fd23f6c3420 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl/dashboard-metrics-scraper id=d1df45a6-3a3b-4a9c-a5f4-3b44dbb34b3d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d6d700617ea5c64410d1f3a1d752c15f6fb793be2a37757627990a7d0533bfd
	Nov 24 04:14:28 old-k8s-version-762702 conmon[1678]: conmon b89b8f495cfe78a41475 <ninfo>: container 1680 exited with status 1
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.494947121Z" level=info msg="Removing container: 7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd" id=26bce0fa-3c76-4133-82ef-c3c9804621ad name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.506137835Z" level=info msg="Error loading conmon cgroup of container 7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd: cgroup deleted" id=26bce0fa-3c76-4133-82ef-c3c9804621ad name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:14:28 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:28.509546479Z" level=info msg="Removed container 7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl/dashboard-metrics-scraper" id=26bce0fa-3c76-4133-82ef-c3c9804621ad name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.031207948Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.038976995Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.039024651Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.03904843Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.043280101Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.043313348Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.043336184Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.047818656Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.047853799Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.047879202Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.052231859Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:14:37 old-k8s-version-762702 crio[664]: time="2025-11-24T04:14:37.05226715Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	b89b8f495cfe7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   8d6d700617ea5       dashboard-metrics-scraper-5f989dc9cf-6zmfl       kubernetes-dashboard
	6d5b65108d13f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   f7905f1f7d24c       storage-provisioner                              kube-system
	0b878cc21f861       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   dada1b8525d12       kubernetes-dashboard-8694d4445c-tzxjs            kubernetes-dashboard
	e5ab4162a62ac       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   6f576ae5c172b       busybox                                          default
	7a9b8cf11ce99       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           53 seconds ago      Running             coredns                     1                   89ca73cf383f0       coredns-5dd5756b68-c5hgr                         kube-system
	d08b3a0aabdbd       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           53 seconds ago      Running             kube-proxy                  1                   b8beb494f1517       kube-proxy-7ml4n                                 kube-system
	0baddab83fc97       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   f7905f1f7d24c       storage-provisioner                              kube-system
	8cd173cea9a6e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago      Running             kindnet-cni                 1                   717150b3ececc       kindnet-lkhzw                                    kube-system
	3dda3e0132288       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           59 seconds ago      Running             kube-scheduler              1                   257f177ad5f4c       kube-scheduler-old-k8s-version-762702            kube-system
	b120202a1fd97       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           59 seconds ago      Running             etcd                        1                   f84c463b326d3       etcd-old-k8s-version-762702                      kube-system
	54e9d746b3ca2       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           59 seconds ago      Running             kube-apiserver              1                   adcb8d1402134       kube-apiserver-old-k8s-version-762702            kube-system
	bbf50eb55a950       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           59 seconds ago      Running             kube-controller-manager     1                   078830a18384b       kube-controller-manager-old-k8s-version-762702   kube-system
	
	
	==> coredns [7a9b8cf11ce99604979c137a579bdbc8e2fadf7960914a459db24120e33d0076] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35998 - 17945 "HINFO IN 1242135383228280311.8938716767801997800. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033415739s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-762702
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-762702
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=old-k8s-version-762702
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_12_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:12:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-762702
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:14:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:14:26 +0000   Mon, 24 Nov 2025 04:12:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:14:26 +0000   Mon, 24 Nov 2025 04:12:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:14:26 +0000   Mon, 24 Nov 2025 04:12:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:14:26 +0000   Mon, 24 Nov 2025 04:13:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-762702
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                1df4042d-4e31-477e-85db-12513191744f
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-c5hgr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-old-k8s-version-762702                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-lkhzw                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-762702             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-762702    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-7ml4n                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-762702             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-6zmfl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-tzxjs             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 52s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-762702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-762702 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s                   node-controller  Node old-k8s-version-762702 event: Registered Node old-k8s-version-762702 in Controller
	  Normal  NodeReady                96s                    kubelet          Node old-k8s-version-762702 status is now: NodeReady
	  Normal  Starting                 60s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node old-k8s-version-762702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)      kubelet          Node old-k8s-version-762702 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                    node-controller  Node old-k8s-version-762702 event: Registered Node old-k8s-version-762702 in Controller
	
	
	==> dmesg <==
	[Nov24 03:46] overlayfs: idmapped layers are currently not supported
	[Nov24 03:51] overlayfs: idmapped layers are currently not supported
	[ +32.185990] overlayfs: idmapped layers are currently not supported
	[Nov24 03:52] overlayfs: idmapped layers are currently not supported
	[Nov24 03:54] overlayfs: idmapped layers are currently not supported
	[Nov24 03:55] overlayfs: idmapped layers are currently not supported
	[Nov24 03:56] overlayfs: idmapped layers are currently not supported
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b120202a1fd97058a5aedf9f2bb21f0de530aaeecb2a7185c93067ac1ee7214d] <==
	{"level":"info","ts":"2025-11-24T04:13:51.111175Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T04:13:51.111215Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T04:13:51.111461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-24T04:13:51.111583Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-24T04:13:51.111808Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T04:13:51.111886Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T04:13:51.208386Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T04:13:51.208624Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T04:13:51.208655Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T04:13:51.208752Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T04:13:51.20876Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T04:13:52.254511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-24T04:13:52.254625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-24T04:13:52.254684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-24T04:13:52.254724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-24T04:13:52.25477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-24T04:13:52.254812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-24T04:13:52.254851Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-24T04:13:52.25714Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-762702 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T04:13:52.257336Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T04:13:52.257853Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T04:13:52.257949Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T04:13:52.258012Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T04:13:52.259507Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-24T04:13:52.26093Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 04:14:50 up  2:56,  0 user,  load average: 1.29, 2.71, 2.54
	Linux old-k8s-version-762702 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8cd173cea9a6e0c8848501a56105eae2eb7845d3ad6d5d080437ff7aea8df499] <==
	I1124 04:13:56.839412       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:13:56.840212       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 04:13:56.840345       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:13:56.840358       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:13:56.840369       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:13:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:13:57.027056       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:13:57.027204       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:13:57.027241       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:13:57.027860       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 04:14:27.027666       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 04:14:27.027683       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 04:14:27.027801       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 04:14:27.029021       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1124 04:14:28.527409       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:14:28.527443       1 metrics.go:72] Registering metrics
	I1124 04:14:28.527511       1 controller.go:711] "Syncing nftables rules"
	I1124 04:14:37.030855       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:14:37.030921       1 main.go:301] handling current node
	I1124 04:14:47.033426       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:14:47.033481       1 main.go:301] handling current node
	
	
	==> kube-apiserver [54e9d746b3ca2739d8be883f3078b9d3c9c03574f0b6d7975d0cec75f406d75d] <==
	I1124 04:13:55.664723       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 04:13:55.664947       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 04:13:55.677578       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:13:55.724955       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 04:13:55.743812       1 shared_informer.go:318] Caches are synced for configmaps
	I1124 04:13:55.746146       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 04:13:55.746190       1 aggregator.go:166] initial CRD sync complete...
	I1124 04:13:55.746198       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 04:13:55.746205       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 04:13:55.746221       1 cache.go:39] Caches are synced for autoregister controller
	I1124 04:13:55.762901       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 04:13:55.765918       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 04:13:55.774916       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1124 04:13:55.834993       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 04:13:56.313448       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:13:57.552281       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 04:13:57.601826       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 04:13:57.630638       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:13:57.641714       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:13:57.653722       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 04:13:57.709973       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.154.195"}
	I1124 04:13:57.729512       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.174.142"}
	I1124 04:14:07.675906       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 04:14:07.712456       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 04:14:07.737598       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [bbf50eb55a9501a33ac2de73d034111945ecb64e8907c5f3016c733432c67d30] <==
	I1124 04:14:07.761083       1 shared_informer.go:318] Caches are synced for persistent volume
	I1124 04:14:07.794189       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-tzxjs"
	I1124 04:14:07.794335       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-6zmfl"
	I1124 04:14:07.807725       1 shared_informer.go:318] Caches are synced for disruption
	I1124 04:14:07.823025       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 04:14:07.839135       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="84.261167ms"
	I1124 04:14:07.839905       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="93.074545ms"
	I1124 04:14:07.872849       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="32.898586ms"
	I1124 04:14:07.872920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="33.47µs"
	I1124 04:14:07.872964       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="33.788755ms"
	I1124 04:14:07.873005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.089µs"
	I1124 04:14:07.882837       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.572µs"
	I1124 04:14:07.887332       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 04:14:08.243139       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 04:14:08.243173       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 04:14:08.244544       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 04:14:13.491336       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="24.790463ms"
	I1124 04:14:13.491655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.84µs"
	I1124 04:14:17.478732       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.019µs"
	I1124 04:14:18.489401       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.476µs"
	I1124 04:14:19.481369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.22µs"
	I1124 04:14:29.512108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="100.588µs"
	I1124 04:14:31.114622       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.338605ms"
	I1124 04:14:31.115179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.498µs"
	I1124 04:14:38.139824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.284µs"
	
	
	==> kube-proxy [d08b3a0aabdbd70340b4745beb5f9d34a57c7e1f07a3837a7b5ed36377e70cff] <==
	I1124 04:13:57.181097       1 server_others.go:69] "Using iptables proxy"
	I1124 04:13:57.196742       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1124 04:13:57.323568       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:13:57.328248       1 server_others.go:152] "Using iptables Proxier"
	I1124 04:13:57.328348       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 04:13:57.328393       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 04:13:57.328457       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 04:13:57.328946       1 server.go:846] "Version info" version="v1.28.0"
	I1124 04:13:57.328998       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:13:57.329778       1 config.go:188] "Starting service config controller"
	I1124 04:13:57.329883       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 04:13:57.329936       1 config.go:97] "Starting endpoint slice config controller"
	I1124 04:13:57.329968       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 04:13:57.330448       1 config.go:315] "Starting node config controller"
	I1124 04:13:57.332259       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 04:13:57.430187       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1124 04:13:57.430242       1 shared_informer.go:318] Caches are synced for service config
	I1124 04:13:57.433840       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3dda3e01322889ba6ae662cf36d250b792923223d15823ac24db5c9c42c3272c] <==
	I1124 04:13:53.404150       1 serving.go:348] Generated self-signed cert in-memory
	W1124 04:13:55.622899       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 04:13:55.623030       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 04:13:55.623071       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 04:13:55.623104       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 04:13:55.729690       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1124 04:13:55.729798       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:13:55.731897       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1124 04:13:55.732036       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1124 04:13:55.732172       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:13:55.744982       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 04:13:55.846339       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 04:14:07 old-k8s-version-762702 kubelet[794]: I1124 04:14:07.818481     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xrpr\" (UniqueName: \"kubernetes.io/projected/6cadf852-5cc2-4f08-ad93-6d8f2962ce1e-kube-api-access-6xrpr\") pod \"kubernetes-dashboard-8694d4445c-tzxjs\" (UID: \"6cadf852-5cc2-4f08-ad93-6d8f2962ce1e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-tzxjs"
	Nov 24 04:14:07 old-k8s-version-762702 kubelet[794]: I1124 04:14:07.818691     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6cadf852-5cc2-4f08-ad93-6d8f2962ce1e-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-tzxjs\" (UID: \"6cadf852-5cc2-4f08-ad93-6d8f2962ce1e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-tzxjs"
	Nov 24 04:14:07 old-k8s-version-762702 kubelet[794]: I1124 04:14:07.822045     794 topology_manager.go:215] "Topology Admit Handler" podUID="614839c1-1d2d-4342-93dc-0cae816d580f" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-6zmfl"
	Nov 24 04:14:07 old-k8s-version-762702 kubelet[794]: I1124 04:14:07.919514     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/614839c1-1d2d-4342-93dc-0cae816d580f-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-6zmfl\" (UID: \"614839c1-1d2d-4342-93dc-0cae816d580f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl"
	Nov 24 04:14:07 old-k8s-version-762702 kubelet[794]: I1124 04:14:07.919573     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmd86\" (UniqueName: \"kubernetes.io/projected/614839c1-1d2d-4342-93dc-0cae816d580f-kube-api-access-bmd86\") pod \"dashboard-metrics-scraper-5f989dc9cf-6zmfl\" (UID: \"614839c1-1d2d-4342-93dc-0cae816d580f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl"
	Nov 24 04:14:08 old-k8s-version-762702 kubelet[794]: W1124 04:14:08.150682     794 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/crio-dada1b8525d12ffd79de05bbcf039d2367a00aa34e983585529f0b9aa9ae09bc WatchSource:0}: Error finding container dada1b8525d12ffd79de05bbcf039d2367a00aa34e983585529f0b9aa9ae09bc: Status 404 returned error can't find the container with id dada1b8525d12ffd79de05bbcf039d2367a00aa34e983585529f0b9aa9ae09bc
	Nov 24 04:14:08 old-k8s-version-762702 kubelet[794]: W1124 04:14:08.170867     794 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b9dfaaddc60d2719aecb1147663224db018847e7dac0caa2182ee474b875f47a/crio-8d6d700617ea5c64410d1f3a1d752c15f6fb793be2a37757627990a7d0533bfd WatchSource:0}: Error finding container 8d6d700617ea5c64410d1f3a1d752c15f6fb793be2a37757627990a7d0533bfd: Status 404 returned error can't find the container with id 8d6d700617ea5c64410d1f3a1d752c15f6fb793be2a37757627990a7d0533bfd
	Nov 24 04:14:17 old-k8s-version-762702 kubelet[794]: I1124 04:14:17.458755     794 scope.go:117] "RemoveContainer" containerID="9adb6b17185b6682e8c2c8dd1a99c7a053f8b55e59d713d66db3fa31374bb565"
	Nov 24 04:14:17 old-k8s-version-762702 kubelet[794]: I1124 04:14:17.476696     794 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-tzxjs" podStartSLOduration=5.762431201 podCreationTimestamp="2025-11-24 04:14:07 +0000 UTC" firstStartedPulling="2025-11-24 04:14:08.158762705 +0000 UTC m=+18.058497948" lastFinishedPulling="2025-11-24 04:14:12.872966972 +0000 UTC m=+22.772702215" observedRunningTime="2025-11-24 04:14:13.467003062 +0000 UTC m=+23.366738337" watchObservedRunningTime="2025-11-24 04:14:17.476635468 +0000 UTC m=+27.376370719"
	Nov 24 04:14:18 old-k8s-version-762702 kubelet[794]: I1124 04:14:18.462737     794 scope.go:117] "RemoveContainer" containerID="9adb6b17185b6682e8c2c8dd1a99c7a053f8b55e59d713d66db3fa31374bb565"
	Nov 24 04:14:18 old-k8s-version-762702 kubelet[794]: I1124 04:14:18.463124     794 scope.go:117] "RemoveContainer" containerID="7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd"
	Nov 24 04:14:18 old-k8s-version-762702 kubelet[794]: E1124 04:14:18.464131     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6zmfl_kubernetes-dashboard(614839c1-1d2d-4342-93dc-0cae816d580f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl" podUID="614839c1-1d2d-4342-93dc-0cae816d580f"
	Nov 24 04:14:19 old-k8s-version-762702 kubelet[794]: I1124 04:14:19.467050     794 scope.go:117] "RemoveContainer" containerID="7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd"
	Nov 24 04:14:19 old-k8s-version-762702 kubelet[794]: E1124 04:14:19.467329     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6zmfl_kubernetes-dashboard(614839c1-1d2d-4342-93dc-0cae816d580f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl" podUID="614839c1-1d2d-4342-93dc-0cae816d580f"
	Nov 24 04:14:27 old-k8s-version-762702 kubelet[794]: I1124 04:14:27.485415     794 scope.go:117] "RemoveContainer" containerID="0baddab83fc97881a782442c419e631b24a2e0920b5bbc40571b6ca47409b609"
	Nov 24 04:14:28 old-k8s-version-762702 kubelet[794]: I1124 04:14:28.125139     794 scope.go:117] "RemoveContainer" containerID="7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd"
	Nov 24 04:14:28 old-k8s-version-762702 kubelet[794]: I1124 04:14:28.493269     794 scope.go:117] "RemoveContainer" containerID="7269e4ab74c239334a8c02dde1ef62edd1a14e60113f1a1f9297857da30b33dd"
	Nov 24 04:14:29 old-k8s-version-762702 kubelet[794]: I1124 04:14:29.497384     794 scope.go:117] "RemoveContainer" containerID="b89b8f495cfe78a41475c2ab6476b4f7445f50c81d516ab1dd5a3fd23f6c3420"
	Nov 24 04:14:29 old-k8s-version-762702 kubelet[794]: E1124 04:14:29.497679     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6zmfl_kubernetes-dashboard(614839c1-1d2d-4342-93dc-0cae816d580f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl" podUID="614839c1-1d2d-4342-93dc-0cae816d580f"
	Nov 24 04:14:38 old-k8s-version-762702 kubelet[794]: I1124 04:14:38.125194     794 scope.go:117] "RemoveContainer" containerID="b89b8f495cfe78a41475c2ab6476b4f7445f50c81d516ab1dd5a3fd23f6c3420"
	Nov 24 04:14:38 old-k8s-version-762702 kubelet[794]: E1124 04:14:38.125503     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6zmfl_kubernetes-dashboard(614839c1-1d2d-4342-93dc-0cae816d580f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6zmfl" podUID="614839c1-1d2d-4342-93dc-0cae816d580f"
	Nov 24 04:14:45 old-k8s-version-762702 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 04:14:45 old-k8s-version-762702 kubelet[794]: I1124 04:14:45.209658     794 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 04:14:45 old-k8s-version-762702 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 04:14:45 old-k8s-version-762702 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0b878cc21f861b10f2b465a433c6552197db070682805f4c2674b0aa81bf3844] <==
	2025/11/24 04:14:12 Using namespace: kubernetes-dashboard
	2025/11/24 04:14:12 Using in-cluster config to connect to apiserver
	2025/11/24 04:14:12 Using secret token for csrf signing
	2025/11/24 04:14:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 04:14:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 04:14:12 Successful initial request to the apiserver, version: v1.28.0
	2025/11/24 04:14:12 Generating JWE encryption key
	2025/11/24 04:14:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 04:14:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 04:14:13 Initializing JWE encryption key from synchronized object
	2025/11/24 04:14:13 Creating in-cluster Sidecar client
	2025/11/24 04:14:13 Serving insecurely on HTTP port: 9090
	2025/11/24 04:14:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 04:14:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 04:14:12 Starting overwatch
	
	
	==> storage-provisioner [0baddab83fc97881a782442c419e631b24a2e0920b5bbc40571b6ca47409b609] <==
	I1124 04:13:56.905568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 04:14:26.927961       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6d5b65108d13f59e0a0172242b6877e1aa183242ced52c6bb7d03102dd8bc068] <==
	I1124 04:14:27.534089       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 04:14:27.550122       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 04:14:27.550183       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 04:14:44.949000       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 04:14:44.949276       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-762702_530938c6-ad9a-4601-8127-df869a61a610!
	I1124 04:14:44.950573       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfe1276a-d502-46a4-811c-2d6200e130b0", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-762702_530938c6-ad9a-4601-8127-df869a61a610 became leader
	I1124 04:14:45.054387       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-762702_530938c6-ad9a-4601-8127-df869a61a610!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-762702 -n old-k8s-version-762702
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-762702 -n old-k8s-version-762702: exit status 2 (435.230921ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-762702 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.62s)
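Note: the post-mortem above is consistent with a pause that half-completed: systemd reports kubelet stopped at 04:14:45, yet `status --format={{.APIServer}}` still returns "Running" with exit status 2. A minimal manual re-check of that state, assuming the old-k8s-version-762702 node container were still present (the audit log further below shows the profile is deleted shortly afterwards):

	out/minikube-linux-arm64 -p old-k8s-version-762702 status
	docker exec old-k8s-version-762702 sudo systemctl is-active kubelet
	docker exec old-k8s-version-762702 sudo runc list -f json   # the listing the paused-state check reports failing elsewhere in this report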

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-600301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-600301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (280.080061ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:16:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-600301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
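Note: as the stderr above shows, the paused-state check runs `sudo runc list -f json` inside the node and fails because /run/runc is absent on this crio-based node. A hedged manual confirmation of both facts, assuming the no-preload-600301 container is still running and that crictl is available in the kicbase image:

	docker exec no-preload-600301 sudo ls /run/runc
	docker exec no-preload-600301 sudo crictl ps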
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-600301 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-600301 describe deploy/metrics-server -n kube-system: exit status 1 (89.21821ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-600301 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
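Note: the image assertion fails with empty deployment info because the enable step never got far enough to apply the manifest. Had the addon been applied, a direct way to verify the --images/--registries override, assuming the standard metrics-server deployment name, would be:

	kubectl --context no-preload-600301 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to print: fake.domain/registry.k8s.io/echoserver:1.4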
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-600301
helpers_test.go:243: (dbg) docker inspect no-preload-600301:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c",
	        "Created": "2025-11-24T04:14:55.518156491Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 476816,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:14:55.600827001Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/hostname",
	        "HostsPath": "/var/lib/docker/containers/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/hosts",
	        "LogPath": "/var/lib/docker/containers/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c-json.log",
	        "Name": "/no-preload-600301",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-600301:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-600301",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c",
	                "LowerDir": "/var/lib/docker/overlay2/eef5958de4b0cc15d3cf1c85d274e91ca573dec4105ed431ccc177b754c82fbb-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eef5958de4b0cc15d3cf1c85d274e91ca573dec4105ed431ccc177b754c82fbb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eef5958de4b0cc15d3cf1c85d274e91ca573dec4105ed431ccc177b754c82fbb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eef5958de4b0cc15d3cf1c85d274e91ca573dec4105ed431ccc177b754c82fbb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-600301",
	                "Source": "/var/lib/docker/volumes/no-preload-600301/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-600301",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-600301",
	                "name.minikube.sigs.k8s.io": "no-preload-600301",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f8212cc56907c844d56e3b85067ff5242bff4a28602908e1a7a905367142bcdf",
	            "SandboxKey": "/var/run/docker/netns/f8212cc56907",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-600301": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:5f:74:bc:be:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ebf72ee754bee872530e47e2d8a7a6196e915259be85acc5eb692aa3f4588a34",
	                    "EndpointID": "37b4705797e84b90ffadc753e98ec18b493e6d929df8c3c6947871592099716b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-600301",
	                        "49ddc9e82ab9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-600301 -n no-preload-600301
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-600301 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-600301 logs -n 25: (1.235771109s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-778509 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ ssh     │ -p cilium-778509 sudo crio config                                                                                                                                                                                                             │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │                     │
	│ delete  │ -p cilium-778509                                                                                                                                                                                                                              │ cilium-778509             │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │ 24 Nov 25 04:10 UTC │
	│ start   │ -p force-systemd-env-400958 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-400958  │ jenkins │ v1.37.0 │ 24 Nov 25 04:10 UTC │ 24 Nov 25 04:11 UTC │
	│ delete  │ -p kubernetes-upgrade-207884                                                                                                                                                                                                                  │ kubernetes-upgrade-207884 │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ start   │ -p cert-expiration-918798 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-918798    │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ delete  │ -p force-systemd-env-400958                                                                                                                                                                                                                   │ force-systemd-env-400958  │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ start   │ -p cert-options-967682 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:12 UTC │
	│ ssh     │ cert-options-967682 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ ssh     │ -p cert-options-967682 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ delete  │ -p cert-options-967682                                                                                                                                                                                                                        │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-762702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │                     │
	│ stop    │ -p old-k8s-version-762702 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-762702 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:13 UTC │
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:14 UTC │
	│ image   │ old-k8s-version-762702 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ pause   │ -p old-k8s-version-762702 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │                     │
	│ delete  │ -p old-k8s-version-762702                                                                                                                                                                                                                     │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ delete  │ -p old-k8s-version-762702                                                                                                                                                                                                                     │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301         │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p cert-expiration-918798 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-918798    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:15 UTC │
	│ delete  │ -p cert-expiration-918798                                                                                                                                                                                                                     │ cert-expiration-918798    │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:15 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529        │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-600301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-600301         │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:15:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:15:24.376243  480149 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:15:24.376363  480149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:15:24.376369  480149 out.go:374] Setting ErrFile to fd 2...
	I1124 04:15:24.376375  480149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:15:24.376761  480149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:15:24.377241  480149 out.go:368] Setting JSON to false
	I1124 04:15:24.378198  480149 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10654,"bootTime":1763947071,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:15:24.378299  480149 start.go:143] virtualization:  
	I1124 04:15:24.383362  480149 out.go:179] * [embed-certs-520529] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:15:24.387726  480149 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:15:24.387791  480149 notify.go:221] Checking for updates...
	I1124 04:15:24.397670  480149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:15:24.401104  480149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:15:24.404427  480149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:15:24.407578  480149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:15:24.410765  480149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:15:24.414426  480149 config.go:182] Loaded profile config "no-preload-600301": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:15:24.414621  480149 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:15:24.464390  480149 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:15:24.464524  480149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:15:24.629523  480149 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-24 04:15:24.609989462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:15:24.629636  480149 docker.go:319] overlay module found
	I1124 04:15:24.633166  480149 out.go:179] * Using the docker driver based on user configuration
	I1124 04:15:24.636179  480149 start.go:309] selected driver: docker
	I1124 04:15:24.636204  480149 start.go:927] validating driver "docker" against <nil>
	I1124 04:15:24.636220  480149 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:15:24.637107  480149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:15:24.932456  480149 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-24 04:15:24.920980501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:15:24.932620  480149 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 04:15:24.932862  480149 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:15:24.935957  480149 out.go:179] * Using Docker driver with root privileges
	I1124 04:15:24.939116  480149 cni.go:84] Creating CNI manager for ""
	I1124 04:15:24.939252  480149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:15:24.939268  480149 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 04:15:24.939347  480149 start.go:353] cluster config:
	{Name:embed-certs-520529 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
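
The cluster config dumped above is what gets persisted as profiles/embed-certs-520529/config.json a few lines below. A minimal Go sketch of that save pattern, using a hypothetical, heavily trimmed Config type (the real minikube struct carries dozens more fields, as the dump shows):

    package main

    import (
    	"encoding/json"
    	"os"
    	"path/filepath"
    )

    // Config is an illustrative stand-in for minikube's cluster config.
    type Config struct {
    	Name              string
    	Driver            string
    	ContainerRuntime  string
    	KubernetesVersion string
    	Memory            int // MB
    	CPUs              int
    }

    func saveConfig(miniHome string, c Config) error {
    	dir := filepath.Join(miniHome, "profiles", c.Name)
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		return err
    	}
    	data, err := json.MarshalIndent(c, "", "  ")
    	if err != nil {
    		return err
    	}
    	// Write to a temp file first so a crash cannot leave a truncated config.
    	tmp := filepath.Join(dir, "config.json.tmp")
    	if err := os.WriteFile(tmp, data, 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, filepath.Join(dir, "config.json"))
    }

    func main() {
    	_ = saveConfig(os.Getenv("HOME")+"/.minikube", Config{
    		Name: "embed-certs-520529", Driver: "docker",
    		ContainerRuntime: "crio", KubernetesVersion: "v1.34.1",
    		Memory: 3072, CPUs: 2,
    	})
    }

The WriteFile-with-lock lines in the log suggest the real code also serializes concurrent writers, which this sketch omits.
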
	I1124 04:15:24.942636  480149 out.go:179] * Starting "embed-certs-520529" primary control-plane node in "embed-certs-520529" cluster
	I1124 04:15:24.945610  480149 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:15:24.948713  480149 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:15:24.951713  480149 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:15:24.951780  480149 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 04:15:24.951791  480149 cache.go:65] Caching tarball of preloaded images
	I1124 04:15:24.951873  480149 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:15:24.952639  480149 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 04:15:24.952776  480149 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/config.json ...
	I1124 04:15:24.952815  480149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/config.json: {Name:mk616f6ae6a86a9cefb60375b643f9e550321d3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:24.951947  480149 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:15:25.038880  480149 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:15:25.038902  480149 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:15:25.038918  480149 cache.go:243] Successfully downloaded all kic artifacts
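
The "exists in daemon, skipping pull/load" decisions above follow a probe-then-pull pattern against the local Docker daemon. A rough sketch (ensureImage is a hypothetical helper, not minikube's API; the docker CLI calls are standard):

    package main

    import "os/exec"

    // ensureImage probes the local daemon for the pinned kicbase digest and
    // only pulls when the inspect fails.
    func ensureImage(ref string) error {
    	if exec.Command("docker", "image", "inspect", ref).Run() == nil {
    		return nil // found in local docker daemon, skipping pull
    	}
    	return exec.Command("docker", "pull", ref).Run()
    }

    func main() {
    	_ = ensureImage("gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787")
    }
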
	I1124 04:15:25.038950  480149 start.go:360] acquireMachinesLock for embed-certs-520529: {Name:mk545d2cd105b23ef8983ff95cd892d06612a01e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:15:25.039174  480149 start.go:364] duration metric: took 204.22µs to acquireMachinesLock for "embed-certs-520529"
	I1124 04:15:25.039223  480149 start.go:93] Provisioning new machine with config: &{Name:embed-certs-520529 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:15:25.039376  480149 start.go:125] createHost starting for "" (driver="docker")
	I1124 04:15:24.355347  476511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:15:24.382934  476511 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 04:15:24.388617  476511 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 04:15:24.388647  476511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1124 04:15:24.427979  476511 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 04:15:24.454272  476511 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 04:15:24.454682  476511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
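
Both the kubelet and kubeadm transfers above follow the same check-then-copy pattern: stat the binary on the node, and only push the cached copy when stat exits non-zero. A simplified sketch with a hypothetical ensureRemoteBinary helper (the real code runs stat with a size/mtime format and streams over its own SSH session rather than shelling out):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureRemoteBinary copies localPath to host:remotePath only when the
    // remote stat fails, i.e. the file is absent.
    func ensureRemoteBinary(host, remotePath, localPath string) error {
    	// stat exits non-zero when the file is missing; that is the copy signal.
    	if exec.Command("ssh", host, "stat", remotePath).Run() == nil {
    		return nil // already present, skip the transfer
    	}
    	if out, err := exec.Command("scp", localPath, host+":"+remotePath).CombinedOutput(); err != nil {
    		return fmt.Errorf("scp %s: %v: %s", localPath, err, out)
    	}
    	return nil
    }

    func main() {
    	_ = ensureRemoteBinary("docker@127.0.0.1",
    		"/var/lib/minikube/binaries/v1.34.1/kubelet",
    		"/home/jenkins/minikube-integration/21975-289526/.minikube/cache/linux/arm64/v1.34.1/kubelet")
    }
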
	I1124 04:15:25.266700  476511 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:15:25.276546  476511 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 04:15:25.292357  476511 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:15:25.309316  476511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1124 04:15:25.326322  476511 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:15:25.330563  476511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
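
The bash one-liner above pins control-plane.minikube.internal to the node IP: it filters out any existing entry for that hostname and appends a fresh one, writing through a temp file. The same rewrite expressed as a small Go sketch (pinHost is illustrative, not minikube's code):

    package main

    import (
    	"os"
    	"strings"
    )

    // pinHost drops any stale line ending in "\t<name>" and appends the
    // current "<ip>\t<name>" mapping.
    func pinHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // stale entry for this hostname
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	_ = pinHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal")
    }
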
	I1124 04:15:25.343125  476511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:15:25.478197  476511 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:15:25.499712  476511 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301 for IP: 192.168.85.2
	I1124 04:15:25.499735  476511 certs.go:195] generating shared ca certs ...
	I1124 04:15:25.499765  476511 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:25.499925  476511 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:15:25.499975  476511 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:15:25.499987  476511 certs.go:257] generating profile certs ...
	I1124 04:15:25.500042  476511 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.key
	I1124 04:15:25.500067  476511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt with IP's: []
	I1124 04:15:25.730431  476511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt ...
	I1124 04:15:25.730479  476511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: {Name:mk0e9120e1d1840ec9de17976f3227ec99e68052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:25.730663  476511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.key ...
	I1124 04:15:25.730678  476511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.key: {Name:mk0a3f9c7f4b7902958121b7b6359b5397f27186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:25.730773  476511 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.key.18edfd9e
	I1124 04:15:25.730789  476511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.crt.18edfd9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 04:15:25.830026  476511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.crt.18edfd9e ...
	I1124 04:15:25.830129  476511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.crt.18edfd9e: {Name:mkce54d1732ff319ae823a370bdc7183ed6d86e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:25.830343  476511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.key.18edfd9e ...
	I1124 04:15:25.830382  476511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.key.18edfd9e: {Name:mk02ac4736c054c709a009580e1627c251433c9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:25.830544  476511 certs.go:382] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.crt.18edfd9e -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.crt
	I1124 04:15:25.830676  476511 certs.go:386] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.key.18edfd9e -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.key
	I1124 04:15:25.830765  476511 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/proxy-client.key
	I1124 04:15:25.830817  476511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/proxy-client.crt with IP's: []
	I1124 04:15:26.094963  476511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/proxy-client.crt ...
	I1124 04:15:26.094997  476511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/proxy-client.crt: {Name:mkbf9acdd6ad84de15b14573830e78c1b1b21f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:26.095191  476511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/proxy-client.key ...
	I1124 04:15:26.095206  476511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/proxy-client.key: {Name:mkfaae78f8a39ad2869fbabfbae95e0154388190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
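
The certs.go/crypto.go lines above generate the per-profile certificates signed by the shared minikubeCA: a "minikube-user" client cert, an apiserver serving cert with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], and an "aggregator" proxy-client cert. A self-contained sketch of that signing flow with Go's crypto/x509 (key sizes and lifetimes here are assumptions, not minikube's exact values; errors are elided for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Self-signed CA standing in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Apiserver serving cert with the IP SANs from the log.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
    		},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
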
	I1124 04:15:26.095403  476511 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:15:26.095454  476511 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:15:26.095469  476511 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:15:26.095497  476511 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:15:26.095527  476511 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:15:26.095554  476511 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:15:26.095606  476511 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:15:26.096212  476511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:15:26.120855  476511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:15:26.144537  476511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:15:26.167668  476511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:15:26.228918  476511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 04:15:26.248847  476511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 04:15:26.277817  476511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:15:26.298251  476511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 04:15:26.324550  476511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:15:26.356088  476511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:15:26.395398  476511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:15:26.428259  476511 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:15:26.441107  476511 ssh_runner.go:195] Run: openssl version
	I1124 04:15:26.448099  476511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:15:26.456498  476511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:15:26.460593  476511 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:15:26.460660  476511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:15:26.533973  476511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:15:26.544821  476511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:15:26.553328  476511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:15:26.557429  476511 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:15:26.557492  476511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:15:26.607364  476511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
	I1124 04:15:26.616210  476511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:15:26.624940  476511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:15:26.629924  476511 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:15:26.629989  476511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:15:26.674151  476511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
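
The openssl x509 -hash calls above exist because OpenSSL looks CAs up in /etc/ssl/certs by a <subject-hash>.0 symlink (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). A sketch of that hash-and-link step (linkCA is a hypothetical helper; the openssl flags are the ones shown in the log):

    package main

    import (
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCA computes the OpenSSL subject hash of a CA PEM and creates the
    // /etc/ssl/certs/<hash>.0 symlink the lookup expects.
    func linkCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	os.Remove(link) // replace any stale link; Symlink fails if the name exists
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	_ = linkCA("/usr/share/ca-certificates/minikubeCA.pem")
    }
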
	I1124 04:15:26.683129  476511 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:15:26.688031  476511 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 04:15:26.688118  476511 kubeadm.go:401] StartCluster: {Name:no-preload-600301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-600301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:15:26.688219  476511 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:15:26.688315  476511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:15:26.717430  476511 cri.go:89] found id: ""
	I1124 04:15:26.717523  476511 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:15:26.727538  476511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 04:15:26.735838  476511 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 04:15:26.735925  476511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 04:15:26.746498  476511 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 04:15:26.746515  476511 kubeadm.go:158] found existing configuration files:
	
	I1124 04:15:26.746595  476511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 04:15:26.755295  476511 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 04:15:26.755381  476511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 04:15:26.762936  476511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 04:15:26.771233  476511 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 04:15:26.771322  476511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 04:15:26.778785  476511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 04:15:26.787219  476511 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 04:15:26.787311  476511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 04:15:26.795015  476511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 04:15:26.803884  476511 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 04:15:26.803978  476511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 04:15:26.811998  476511 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 04:15:26.862904  476511 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 04:15:26.863390  476511 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 04:15:26.898295  476511 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 04:15:26.898408  476511 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 04:15:26.898494  476511 kubeadm.go:319] OS: Linux
	I1124 04:15:26.898576  476511 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 04:15:26.898649  476511 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 04:15:26.898723  476511 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 04:15:26.898806  476511 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 04:15:26.898891  476511 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 04:15:26.898992  476511 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 04:15:26.899095  476511 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 04:15:26.899158  476511 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 04:15:26.899223  476511 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 04:15:26.984134  476511 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 04:15:26.984281  476511 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 04:15:26.984398  476511 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 04:15:27.014841  476511 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 04:15:27.020711  476511 out.go:252]   - Generating certificates and keys ...
	I1124 04:15:27.020827  476511 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 04:15:27.020914  476511 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 04:15:27.303901  476511 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 04:15:27.718222  476511 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 04:15:27.859287  476511 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 04:15:28.547128  476511 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 04:15:28.894659  476511 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 04:15:28.895456  476511 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-600301] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 04:15:29.084565  476511 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 04:15:29.085184  476511 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-600301] and IPs [192.168.85.2 127.0.0.1 ::1]
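
The kubeadm init invocation a few dozen lines above prefixes PATH with the pinned /var/lib/minikube/binaries/v1.34.1 directory and passes a long --ignore-preflight-errors list, since checks like Swap, Mem, and SystemVerification are not meaningful inside a docker-driver node. Roughly how such a command string is assembled (a sketch, not minikube's actual helper):

    package main

    import (
    	"os/exec"
    	"strings"
    )

    // kubeadmInit builds the init command the way the log shows it being run:
    // a pinned-PATH env prefix plus the preflight checks to skip.
    func kubeadmInit(version, config string, skip []string) *exec.Cmd {
    	bin := "/var/lib/minikube/binaries/" + version
    	return exec.Command("sudo", "/bin/bash", "-c",
    		`env PATH="`+bin+`:$PATH" kubeadm init --config `+config+
    			" --ignore-preflight-errors="+strings.Join(skip, ","))
    }

    func main() {
    	cmd := kubeadmInit("v1.34.1", "/var/tmp/minikube/kubeadm.yaml",
    		[]string{"Swap", "NumCPU", "Mem", "SystemVerification"})
    	_ = cmd // stream with cmd.CombinedOutput() in real use
    }
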
	I1124 04:15:25.043162  480149 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 04:15:25.043426  480149 start.go:159] libmachine.API.Create for "embed-certs-520529" (driver="docker")
	I1124 04:15:25.043475  480149 client.go:173] LocalClient.Create starting
	I1124 04:15:25.043578  480149 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem
	I1124 04:15:25.043631  480149 main.go:143] libmachine: Decoding PEM data...
	I1124 04:15:25.043652  480149 main.go:143] libmachine: Parsing certificate...
	I1124 04:15:25.043718  480149 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem
	I1124 04:15:25.043770  480149 main.go:143] libmachine: Decoding PEM data...
	I1124 04:15:25.043794  480149 main.go:143] libmachine: Parsing certificate...
	I1124 04:15:25.044468  480149 cli_runner.go:164] Run: docker network inspect embed-certs-520529 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 04:15:25.065856  480149 cli_runner.go:211] docker network inspect embed-certs-520529 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 04:15:25.065951  480149 network_create.go:284] running [docker network inspect embed-certs-520529] to gather additional debugging logs...
	I1124 04:15:25.065969  480149 cli_runner.go:164] Run: docker network inspect embed-certs-520529
	W1124 04:15:25.081551  480149 cli_runner.go:211] docker network inspect embed-certs-520529 returned with exit code 1
	I1124 04:15:25.081584  480149 network_create.go:287] error running [docker network inspect embed-certs-520529]: docker network inspect embed-certs-520529: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-520529 not found
	I1124 04:15:25.081599  480149 network_create.go:289] output of [docker network inspect embed-certs-520529]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-520529 not found
	
	** /stderr **
	I1124 04:15:25.081702  480149 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:15:25.097947  480149 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-740fb099fccc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:7a:9c:b0:4d:41} reservation:<nil>}
	I1124 04:15:25.098323  480149 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b0f25a7c590 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:53:b3:a1:55:1a} reservation:<nil>}
	I1124 04:15:25.098618  480149 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c1d995330d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:83:d9:0c:83:10} reservation:<nil>}
	I1124 04:15:25.099098  480149 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400195b5b0}
	I1124 04:15:25.099124  480149 network_create.go:124] attempt to create docker network embed-certs-520529 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 04:15:25.099181  480149 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-520529 embed-certs-520529
	I1124 04:15:25.180057  480149 network_create.go:108] docker network embed-certs-520529 192.168.76.0/24 created
	I1124 04:15:25.180087  480149 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-520529" container
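
The network.go lines above scan candidate private /24s in steps of 9 (192.168.49.0, .58.0, .67.0, then .76.0) and take the first one no existing bridge occupies, reserving .1 for the gateway and .2 for the node. A simplified scan under those assumptions (minikube's real check also consults Docker's network list, which this sketch omits):

    package main

    import (
    	"fmt"
    	"net"
    )

    // subnetTaken reports whether any local interface already sits inside
    // the candidate CIDR.
    func subnetTaken(cidr string) bool {
    	_, want, _ := net.ParseCIDR(cidr)
    	addrs, _ := net.InterfaceAddrs()
    	for _, a := range addrs {
    		if ip, _, err := net.ParseCIDR(a.String()); err == nil && want.Contains(ip) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// Candidates step by 9, matching the 49 -> 58 -> 67 -> 76 walk above.
    	for third := 49; third <= 247; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if !subnetTaken(cidr) {
    			fmt.Println("using free private subnet", cidr)
    			return
    		}
    		fmt.Println("skipping taken subnet", cidr)
    	}
    }
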
	I1124 04:15:25.180159  480149 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 04:15:25.200171  480149 cli_runner.go:164] Run: docker volume create embed-certs-520529 --label name.minikube.sigs.k8s.io=embed-certs-520529 --label created_by.minikube.sigs.k8s.io=true
	I1124 04:15:25.223971  480149 oci.go:103] Successfully created a docker volume embed-certs-520529
	I1124 04:15:25.224067  480149 cli_runner.go:164] Run: docker run --rm --name embed-certs-520529-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-520529 --entrypoint /usr/bin/test -v embed-certs-520529:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 04:15:26.300713  480149 cli_runner.go:217] Completed: docker run --rm --name embed-certs-520529-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-520529 --entrypoint /usr/bin/test -v embed-certs-520529:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib: (1.076611183s)
	I1124 04:15:26.300740  480149 oci.go:107] Successfully prepared a docker volume embed-certs-520529
	I1124 04:15:26.300780  480149 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:15:26.300796  480149 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 04:15:26.300851  480149 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-520529:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
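
The docker run above is the preload trick: the lz4 tarball of container images is bind-mounted read-only into a throwaway container and untarred into the named volume that later becomes the node's /var, so CRI-O starts with every image already on disk. The same command, wrapped in an illustrative helper:

    package main

    import "os/exec"

    // extractPreload untars the lz4 image tarball into the node's named
    // volume using the kicbase image's own tar binary.
    func extractPreload(tarball, volume, image string) error {
    	return exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
    }

    func main() {
    	_ = extractPreload(
    		"/home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4",
    		"embed-certs-520529",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975")
    }
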
	I1124 04:15:29.657405  476511 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 04:15:29.900294  476511 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 04:15:30.196077  476511 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 04:15:30.196463  476511 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 04:15:31.065739  476511 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 04:15:31.657719  476511 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 04:15:32.532428  476511 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 04:15:33.735757  476511 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 04:15:33.912114  476511 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 04:15:33.912681  476511 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 04:15:33.915571  476511 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 04:15:33.919095  476511 out.go:252]   - Booting up control plane ...
	I1124 04:15:33.919232  476511 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 04:15:33.919319  476511 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 04:15:33.919391  476511 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 04:15:33.946294  476511 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 04:15:33.946666  476511 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 04:15:33.957681  476511 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 04:15:33.958006  476511 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 04:15:33.958201  476511 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 04:15:34.100807  476511 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 04:15:34.101001  476511 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 04:15:31.267110  480149 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-520529:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (4.966219741s)
	I1124 04:15:31.267145  480149 kic.go:203] duration metric: took 4.966345954s to extract preloaded images to volume ...
	W1124 04:15:31.267291  480149 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 04:15:31.267403  480149 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 04:15:31.348055  480149 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-520529 --name embed-certs-520529 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-520529 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-520529 --network embed-certs-520529 --ip 192.168.76.2 --volume embed-certs-520529:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 04:15:31.693005  480149 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Running}}
	I1124 04:15:31.716321  480149 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:15:31.737857  480149 cli_runner.go:164] Run: docker exec embed-certs-520529 stat /var/lib/dpkg/alternatives/iptables
	I1124 04:15:31.810550  480149 oci.go:144] the created container "embed-certs-520529" has a running status.
	I1124 04:15:31.810578  480149 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa...
	I1124 04:15:32.434549  480149 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 04:15:32.455955  480149 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:15:32.476839  480149 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 04:15:32.476858  480149 kic_runner.go:114] Args: [docker exec --privileged embed-certs-520529 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 04:15:32.540137  480149 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:15:32.568746  480149 machine.go:94] provisionDockerMachine start ...
	I1124 04:15:32.568845  480149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:15:32.588513  480149 main.go:143] libmachine: Using SSH client type: native
	I1124 04:15:32.588888  480149 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1124 04:15:32.588898  480149 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:15:32.589604  480149 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
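
The handshake EOF above is benign: sshd inside the just-created container is not accepting sessions yet, and the provisioner retries until it is (the next SSH output for this process arrives about three seconds later). A minimal wait-for-port sketch; note the real code retries the full SSH handshake, not just the TCP dial:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH polls the forwarded port until something accepts a TCP
    // connection.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil // port is up; an SSH client can now try the handshake
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("ssh on %s not ready after %s", addr, timeout)
    }

    func main() {
    	_ = waitForSSH("127.0.0.1:33436", 30*time.Second)
    }
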
	I1124 04:15:35.602871  476511 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501524527s
	I1124 04:15:35.608688  476511 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 04:15:35.609030  476511 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 04:15:35.609156  476511 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 04:15:35.610010  476511 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 04:15:38.991317  476511 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.380932807s
	I1124 04:15:35.746883  480149 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-520529
	
	I1124 04:15:35.746906  480149 ubuntu.go:182] provisioning hostname "embed-certs-520529"
	I1124 04:15:35.746983  480149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:15:35.782819  480149 main.go:143] libmachine: Using SSH client type: native
	I1124 04:15:35.783255  480149 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1124 04:15:35.783281  480149 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-520529 && echo "embed-certs-520529" | sudo tee /etc/hostname
	I1124 04:15:35.992998  480149 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-520529
	
	I1124 04:15:35.993180  480149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:15:36.026789  480149 main.go:143] libmachine: Using SSH client type: native
	I1124 04:15:36.027129  480149 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1124 04:15:36.027157  480149 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-520529' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-520529/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-520529' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 04:15:36.203310  480149 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 04:15:36.203351  480149 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:15:36.203372  480149 ubuntu.go:190] setting up certificates
	I1124 04:15:36.203381  480149 provision.go:84] configureAuth start
	I1124 04:15:36.203458  480149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-520529
	I1124 04:15:36.233398  480149 provision.go:143] copyHostCerts
	I1124 04:15:36.233479  480149 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:15:36.233502  480149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:15:36.233601  480149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:15:36.233731  480149 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:15:36.233747  480149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:15:36.233778  480149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:15:36.233859  480149 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:15:36.233870  480149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:15:36.233907  480149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:15:36.233983  480149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.embed-certs-520529 san=[127.0.0.1 192.168.76.2 embed-certs-520529 localhost minikube]
	I1124 04:15:36.440532  480149 provision.go:177] copyRemoteCerts
	I1124 04:15:36.440599  480149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:15:36.440640  480149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:15:36.462666  480149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:15:36.587831  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:15:36.615675  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 04:15:36.646367  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 04:15:36.681210  480149 provision.go:87] duration metric: took 477.804442ms to configureAuth
	I1124 04:15:36.681278  480149 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:15:36.681490  480149 config.go:182] Loaded profile config "embed-certs-520529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:15:36.681657  480149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:15:36.710722  480149 main.go:143] libmachine: Using SSH client type: native
	I1124 04:15:36.711035  480149 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1124 04:15:36.711050  480149 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:15:37.168217  480149 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 04:15:37.168245  480149 machine.go:97] duration metric: took 4.599477744s to provisionDockerMachine
	I1124 04:15:37.168269  480149 client.go:176] duration metric: took 12.124781988s to LocalClient.Create
	I1124 04:15:37.168292  480149 start.go:167] duration metric: took 12.124867593s to libmachine.API.Create "embed-certs-520529"
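
The sysconfig write a few lines above tells CRI-O to treat the whole service CIDR (10.96.0.0/12) as an insecure registry, which in-cluster registry addons rely on. In the log it happens over SSH via sudo tee followed by a crio restart; run locally as root, the same step would look roughly like this sketch:

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	// Same fragment the log shows being tee'd into /etc/sysconfig.
    	content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
    	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
    		panic(err)
    	}
    	_ = exec.Command("systemctl", "restart", "crio").Run()
    }
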
	I1124 04:15:37.168304  480149 start.go:293] postStartSetup for "embed-certs-520529" (driver="docker")
	I1124 04:15:37.168320  480149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:15:37.168422  480149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:15:37.168492  480149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:15:37.197608  480149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:15:37.313073  480149 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:15:37.321417  480149 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:15:37.321509  480149 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:15:37.321535  480149 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:15:37.321624  480149 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:15:37.321752  480149 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:15:37.321909  480149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:15:37.332436  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:15:37.358365  480149 start.go:296] duration metric: took 190.040932ms for postStartSetup
	I1124 04:15:37.358806  480149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-520529
	I1124 04:15:37.390748  480149 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/config.json ...
	I1124 04:15:37.391035  480149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:15:37.391093  480149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:15:37.427894  480149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:15:37.543090  480149 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:15:37.552337  480149 start.go:128] duration metric: took 12.512943374s to createHost
	I1124 04:15:37.552363  480149 start.go:83] releasing machines lock for "embed-certs-520529", held for 12.513171529s
	I1124 04:15:37.552436  480149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-520529
	I1124 04:15:37.583467  480149 ssh_runner.go:195] Run: cat /version.json
	I1124 04:15:37.583518  480149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:15:37.586576  480149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:15:37.586687  480149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:15:37.624389  480149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:15:37.631305  480149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:15:37.754914  480149 ssh_runner.go:195] Run: systemctl --version
	I1124 04:15:37.868834  480149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:15:37.927868  480149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:15:37.932356  480149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:15:37.932449  480149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:15:37.977109  480149 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
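The find command above is logged with its shell quoting stripped. A quoted equivalent of what runs (a sketch, assuming GNU find's mid-argument {} substitution inside -exec) is:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;

Matching bridge/podman configs are renamed with a .mk_disabled suffix rather than deleted, which is why the next line reports them as "disabled".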
	I1124 04:15:37.977161  480149 start.go:496] detecting cgroup driver to use...
	I1124 04:15:37.977197  480149 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:15:37.977261  480149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:15:38.007935  480149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:15:38.024321  480149 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:15:38.024412  480149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:15:38.052537  480149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:15:38.077732  480149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:15:38.283963  480149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:15:38.473516  480149 docker.go:234] disabling docker service ...
	I1124 04:15:38.473595  480149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:15:38.515218  480149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:15:38.531353  480149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:15:38.724864  480149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:15:38.924265  480149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:15:38.965107  480149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:15:38.985775  480149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:15:38.985855  480149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:15:39.004435  480149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:15:39.004529  480149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:15:39.015054  480149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:15:39.024951  480149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:15:39.037139  480149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:15:39.051896  480149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:15:39.064967  480149 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:15:39.086565  480149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
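Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a sketch; only the rewritten keys are shown, since the edits touch keys in place and any surrounding TOML sections are left as-is):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The unprivileged-port sysctl lets pod processes bind ports below 1024 without CAP_NET_BIND_SERVICE.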
	I1124 04:15:39.101067  480149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:15:39.110933  480149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:15:39.131060  480149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:15:39.297876  480149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 04:15:39.510898  480149 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:15:39.511006  480149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:15:39.515492  480149 start.go:564] Will wait 60s for crictl version
	I1124 04:15:39.515588  480149 ssh_runner.go:195] Run: which crictl
	I1124 04:15:39.519474  480149 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:15:39.550162  480149 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:15:39.550290  480149 ssh_runner.go:195] Run: crio --version
	I1124 04:15:39.583950  480149 ssh_runner.go:195] Run: crio --version
	I1124 04:15:39.640741  480149 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:15:39.643877  480149 cli_runner.go:164] Run: docker network inspect embed-certs-520529 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
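The Go template in the inspect call above renders the docker network as one JSON object. For this profile it would come out roughly as follows (illustrative values: Driver and MTU are assumptions, while the subnet, gateway and container IP match the /etc/hosts entries and node IP logged nearby):

    {"Name": "embed-certs-520529","Driver": "bridge","Subnet": "192.168.76.0/24","Gateway": "192.168.76.1","MTU": 1500, "ContainerIPs": ["192.168.76.2/24",]}

Note the trailing comma inside ContainerIPs: the template's range loop emits one entry per container and never trims the last separator.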
	I1124 04:15:39.668943  480149 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 04:15:39.673397  480149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:15:39.687317  480149 kubeadm.go:884] updating cluster {Name:embed-certs-520529 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:15:39.687455  480149 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:15:39.687532  480149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:15:39.756797  480149 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:15:39.756818  480149 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:15:39.756873  480149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:15:39.786381  480149 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:15:39.786401  480149 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:15:39.786409  480149 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1124 04:15:39.786534  480149 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-520529 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
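The fragment kubeadm.go:947 logs above is the systemd drop-in that is scp'd a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes). Reconstructed as a file (a sketch joining the logged pieces):

    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-520529 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

    [Install]

The empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before overriding it; without it the drop-in would add a second ExecStart and the unit would fail to load.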
	I1124 04:15:39.786620  480149 ssh_runner.go:195] Run: crio config
	I1124 04:15:39.862670  480149 cni.go:84] Creating CNI manager for ""
	I1124 04:15:39.862736  480149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:15:39.862769  480149 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:15:39.862822  480149 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-520529 NodeName:embed-certs-520529 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:15:39.862976  480149 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-520529"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 04:15:39.863094  480149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:15:39.871451  480149 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:15:39.871569  480149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:15:39.879225  480149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 04:15:39.892372  480149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:15:39.906111  480149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1124 04:15:39.919158  480149 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:15:39.923006  480149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:15:39.932754  480149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:15:40.109725  480149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:15:40.140257  480149 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529 for IP: 192.168.76.2
	I1124 04:15:40.140327  480149 certs.go:195] generating shared ca certs ...
	I1124 04:15:40.140357  480149 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:40.140539  480149 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:15:40.140626  480149 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:15:40.140661  480149 certs.go:257] generating profile certs ...
	I1124 04:15:40.140739  480149 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/client.key
	I1124 04:15:40.140778  480149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/client.crt with IP's: []
	I1124 04:15:40.402399  480149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/client.crt ...
	I1124 04:15:40.402433  480149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/client.crt: {Name:mkf99572ff23c068b0225769edcf28218f4c0376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:40.402643  480149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/client.key ...
	I1124 04:15:40.402659  480149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/client.key: {Name:mk937bd339a3214e1fdec522a9d4904337b3bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:40.402756  480149 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.key.be55c4bc
	I1124 04:15:40.402775  480149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.crt.be55c4bc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 04:15:41.093096  480149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.crt.be55c4bc ...
	I1124 04:15:41.093130  480149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.crt.be55c4bc: {Name:mk3d209a1f6fc332ac45da25e5e132849686810c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:41.093374  480149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.key.be55c4bc ...
	I1124 04:15:41.093392  480149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.key.be55c4bc: {Name:mkc466b363c51ff8de263db93a531811ef84e5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:41.093507  480149 certs.go:382] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.crt.be55c4bc -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.crt
	I1124 04:15:41.093614  480149 certs.go:386] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.key.be55c4bc -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.key
	I1124 04:15:41.093680  480149 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.key
	I1124 04:15:41.093699  480149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.crt with IP's: []
	I1124 04:15:41.311892  480149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.crt ...
	I1124 04:15:41.311926  480149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.crt: {Name:mk9ced3b71f3dd428ea96667f449b4d61badfb00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:41.312122  480149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.key ...
	I1124 04:15:41.312137  480149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.key: {Name:mkdf33dcee5e004a06376ffc0f240cd3cbb1e800 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:41.312338  480149 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:15:41.312384  480149 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:15:41.312397  480149 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:15:41.312425  480149 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:15:41.312457  480149 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:15:41.312487  480149 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:15:41.312535  480149 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:15:41.313143  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:15:41.342185  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:15:41.370998  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:15:41.399672  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:15:41.432981  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 04:15:41.467777  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 04:15:41.497174  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:15:41.525744  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 04:15:41.552579  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:15:41.583978  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:15:41.608915  480149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:15:41.641029  480149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:15:41.660831  480149 ssh_runner.go:195] Run: openssl version
	I1124 04:15:41.671202  480149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:15:41.679869  480149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:15:41.686236  480149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:15:41.686329  480149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:15:41.750930  480149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 04:15:41.763641  480149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:15:41.776100  480149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:15:41.780233  480149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:15:41.780316  480149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:15:41.844347  480149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:15:41.854071  480149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:15:41.870956  480149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:15:41.874858  480149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:15:41.874939  480149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:15:41.923917  480149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
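The three test/ln pairs above populate OpenSSL's hashed certificate directory: each link name is the certificate's subject hash plus a ".0" suffix, which is where the 3ec20f2e.0, b5213941.0 and 51391683.0 names come from. A sketch of the derivation for one of them:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem)
    sudo ln -fs /etc/ssl/certs/2913892.pem "/etc/ssl/certs/${h}.0"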
	I1124 04:15:41.934206  480149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:15:41.938144  480149 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 04:15:41.938212  480149 kubeadm.go:401] StartCluster: {Name:embed-certs-520529 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:15:41.938294  480149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:15:41.938366  480149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:15:41.976074  480149 cri.go:89] found id: ""
	I1124 04:15:41.976187  480149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:15:41.985413  480149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
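With the rendered config copied into place, it can be sanity-checked before init. A sketch, assuming the bundled kubeadm supports the config validate subcommand (present in recent kubeadm releases; not something this log confirms):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml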
	I1124 04:15:41.993433  480149 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 04:15:41.993499  480149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 04:15:42.003602  480149 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 04:15:42.003629  480149 kubeadm.go:158] found existing configuration files:
	
	I1124 04:15:42.003695  480149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 04:15:42.015459  480149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 04:15:42.015556  480149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 04:15:42.025097  480149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 04:15:42.035177  480149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 04:15:42.035263  480149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 04:15:42.046066  480149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 04:15:42.055550  480149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 04:15:42.055630  480149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 04:15:42.067734  480149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 04:15:42.086144  480149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 04:15:42.086233  480149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
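The four grep/rm pairs above reduce to one loop: any kubeconfig that does not point at the expected control-plane endpoint is removed so kubeadm can regenerate it. A sketch of the logged behavior (not minikube's actual source):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done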
	I1124 04:15:42.096162  480149 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 04:15:42.230853  480149 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 04:15:42.241397  480149 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 04:15:42.296149  480149 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 04:15:42.296236  480149 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 04:15:42.296283  480149 kubeadm.go:319] OS: Linux
	I1124 04:15:42.296334  480149 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 04:15:42.296395  480149 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 04:15:42.296460  480149 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 04:15:42.296521  480149 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 04:15:42.296583  480149 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 04:15:42.296664  480149 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 04:15:42.296716  480149 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 04:15:42.296774  480149 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 04:15:42.296830  480149 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 04:15:42.385144  480149 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 04:15:42.385341  480149 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 04:15:42.385474  480149 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 04:15:42.398888  480149 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 04:15:44.112897  476511 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.503498108s
	I1124 04:15:42.403822  480149 out.go:252]   - Generating certificates and keys ...
	I1124 04:15:42.403924  480149 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 04:15:42.403995  480149 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 04:15:43.081864  480149 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 04:15:43.377042  480149 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 04:15:43.675939  480149 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 04:15:43.741892  480149 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 04:15:43.865241  480149 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 04:15:43.865852  480149 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-520529 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 04:15:45.079355  476511 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.461721823s
	I1124 04:15:45.154032  476511 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 04:15:45.211474  476511 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 04:15:45.245807  476511 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 04:15:45.246019  476511 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-600301 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 04:15:45.275460  476511 kubeadm.go:319] [bootstrap-token] Using token: o2xg2g.bmigt2503dzbtru4
	I1124 04:15:45.278632  476511 out.go:252]   - Configuring RBAC rules ...
	I1124 04:15:45.278794  476511 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 04:15:45.301253  476511 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 04:15:45.367443  476511 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 04:15:45.375222  476511 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 04:15:45.389103  476511 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 04:15:45.398747  476511 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 04:15:45.480066  476511 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 04:15:45.918273  476511 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 04:15:46.484304  476511 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 04:15:46.485922  476511 kubeadm.go:319] 
	I1124 04:15:46.486004  476511 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 04:15:46.486010  476511 kubeadm.go:319] 
	I1124 04:15:46.486087  476511 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 04:15:46.486090  476511 kubeadm.go:319] 
	I1124 04:15:46.486122  476511 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 04:15:46.486615  476511 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 04:15:46.486680  476511 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 04:15:46.486686  476511 kubeadm.go:319] 
	I1124 04:15:46.486739  476511 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 04:15:46.486743  476511 kubeadm.go:319] 
	I1124 04:15:46.486791  476511 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 04:15:46.486795  476511 kubeadm.go:319] 
	I1124 04:15:46.486846  476511 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 04:15:46.486921  476511 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 04:15:46.486999  476511 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 04:15:46.487004  476511 kubeadm.go:319] 
	I1124 04:15:46.487353  476511 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 04:15:46.487490  476511 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 04:15:46.487498  476511 kubeadm.go:319] 
	I1124 04:15:46.487780  476511 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token o2xg2g.bmigt2503dzbtru4 \
	I1124 04:15:46.487888  476511 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 \
	I1124 04:15:46.488087  476511 kubeadm.go:319] 	--control-plane 
	I1124 04:15:46.488096  476511 kubeadm.go:319] 
	I1124 04:15:46.488390  476511 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 04:15:46.488399  476511 kubeadm.go:319] 
	I1124 04:15:46.488681  476511 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token o2xg2g.bmigt2503dzbtru4 \
	I1124 04:15:46.488976  476511 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 
	I1124 04:15:46.495019  476511 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 04:15:46.495252  476511 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 04:15:46.495357  476511 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
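The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key, reproducible from the CA cert with the standard pipeline (a sketch; openssl pkey is used here as the key-type-agnostic form):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

Both profiles in this run print the same hash (sha256:2194168f…) because they share the single minikubeCA generated for the test suite.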
	I1124 04:15:46.495375  476511 cni.go:84] Creating CNI manager for ""
	I1124 04:15:46.495381  476511 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:15:46.499747  476511 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 04:15:46.502660  476511 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 04:15:46.509795  476511 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 04:15:46.509813  476511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 04:15:46.527211  476511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 04:15:46.939021  476511 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 04:15:46.939178  476511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:15:46.939277  476511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-600301 minikube.k8s.io/updated_at=2025_11_24T04_15_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=no-preload-600301 minikube.k8s.io/primary=true
	I1124 04:15:47.179442  476511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:15:47.179525  476511 ops.go:34] apiserver oom_adj: -16
	I1124 04:15:47.679526  476511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:15:48.179618  476511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:15:48.680331  476511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:15:49.180156  476511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:15:44.592056  480149 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 04:15:44.592640  480149 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-520529 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 04:15:45.014635  480149 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 04:15:46.419223  480149 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 04:15:47.460740  480149 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 04:15:47.461228  480149 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 04:15:47.561316  480149 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 04:15:48.085636  480149 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 04:15:48.967016  480149 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 04:15:49.472789  480149 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 04:15:50.377042  480149 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 04:15:50.377139  480149 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 04:15:50.377206  480149 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 04:15:49.679913  476511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:15:50.180470  476511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:15:50.488964  476511 kubeadm.go:1114] duration metric: took 3.549859835s to wait for elevateKubeSystemPrivileges
	I1124 04:15:50.488991  476511 kubeadm.go:403] duration metric: took 23.800879313s to StartCluster
	I1124 04:15:50.489009  476511 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:50.489073  476511 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:15:50.489724  476511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:15:50.489930  476511 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:15:50.490011  476511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 04:15:50.490233  476511 config.go:182] Loaded profile config "no-preload-600301": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:15:50.490264  476511 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:15:50.490319  476511 addons.go:70] Setting storage-provisioner=true in profile "no-preload-600301"
	I1124 04:15:50.490333  476511 addons.go:239] Setting addon storage-provisioner=true in "no-preload-600301"
	I1124 04:15:50.490353  476511 host.go:66] Checking if "no-preload-600301" exists ...
	I1124 04:15:50.490931  476511 cli_runner.go:164] Run: docker container inspect no-preload-600301 --format={{.State.Status}}
	I1124 04:15:50.491364  476511 addons.go:70] Setting default-storageclass=true in profile "no-preload-600301"
	I1124 04:15:50.491395  476511 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-600301"
	I1124 04:15:50.491685  476511 cli_runner.go:164] Run: docker container inspect no-preload-600301 --format={{.State.Status}}
	I1124 04:15:50.493449  476511 out.go:179] * Verifying Kubernetes components...
	I1124 04:15:50.496668  476511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:15:50.525329  476511 addons.go:239] Setting addon default-storageclass=true in "no-preload-600301"
	I1124 04:15:50.525377  476511 host.go:66] Checking if "no-preload-600301" exists ...
	I1124 04:15:50.525798  476511 cli_runner.go:164] Run: docker container inspect no-preload-600301 --format={{.State.Status}}
	I1124 04:15:50.535095  476511 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:15:50.537944  476511 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:15:50.537968  476511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:15:50.538036  476511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:15:50.564203  476511 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:15:50.564230  476511 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:15:50.564300  476511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:15:50.600669  476511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/no-preload-600301/id_rsa Username:docker}
	I1124 04:15:50.603705  476511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/no-preload-600301/id_rsa Username:docker}
	I1124 04:15:50.937447  476511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 04:15:50.949136  476511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:15:51.024840  476511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:15:51.145511  476511 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:15:51.995863  476511 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.058322054s)
	I1124 04:15:51.995960  476511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.046757063s)
	I1124 04:15:51.996870  476511 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
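The sed pipeline above splices two directives into the Corefile held in the coredns ConfigMap: a log line before errors, and this hosts block ahead of the forward plugin (values straight from the logged sed):

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }

This is what makes host.minikube.internal resolvable from pods; fallthrough hands every other name on to the rest of the Corefile.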
	I1124 04:15:52.503282  476511 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-600301" context rescaled to 1 replicas
	I1124 04:15:52.843231  476511 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.697625648s)
	I1124 04:15:52.843937  476511 node_ready.go:35] waiting up to 6m0s for node "no-preload-600301" to be "Ready" ...
	I1124 04:15:52.844178  476511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.819263572s)
	I1124 04:15:52.847512  476511 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 04:15:52.850395  476511 addons.go:530] duration metric: took 2.360114229s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 04:15:50.380737  480149 out.go:252]   - Booting up control plane ...
	I1124 04:15:50.380846  480149 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 04:15:50.387367  480149 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 04:15:50.388201  480149 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 04:15:50.418243  480149 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 04:15:50.418361  480149 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 04:15:50.429453  480149 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 04:15:50.429581  480149 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 04:15:50.429627  480149 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 04:15:50.731033  480149 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 04:15:50.731171  480149 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 04:15:51.734804  480149 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001489434s
	I1124 04:15:51.736148  480149 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 04:15:51.736715  480149 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 04:15:51.737016  480149 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 04:15:51.739008  480149 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
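The three control-plane-check probes above can be reproduced by hand from inside the node; a sketch (-k skips TLS verification, an assumption here since the serving certs are not in the host trust store):

    curl -k https://192.168.76.2:8443/livez        # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz        # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez          # kube-scheduler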
	W1124 04:15:54.847238  476511 node_ready.go:57] node "no-preload-600301" has "Ready":"False" status (will retry)
	W1124 04:15:56.847321  476511 node_ready.go:57] node "no-preload-600301" has "Ready":"False" status (will retry)
	W1124 04:15:58.847386  476511 node_ready.go:57] node "no-preload-600301" has "Ready":"False" status (will retry)
	I1124 04:15:55.830810  480149 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.09193782s
	I1124 04:15:58.658361  480149 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.918918892s
	I1124 04:16:00.273200  480149 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.535723776s
	I1124 04:16:00.339095  480149 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 04:16:00.408293  480149 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 04:16:00.450903  480149 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 04:16:00.451116  480149 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-520529 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 04:16:00.474244  480149 kubeadm.go:319] [bootstrap-token] Using token: rjod61.p0fcua0nil0mmfc1
	I1124 04:16:00.477427  480149 out.go:252]   - Configuring RBAC rules ...
	I1124 04:16:00.477560  480149 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 04:16:00.490832  480149 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 04:16:00.508614  480149 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 04:16:00.524337  480149 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 04:16:00.542265  480149 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 04:16:00.549966  480149 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 04:16:00.681091  480149 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 04:16:01.154370  480149 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 04:16:01.683282  480149 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 04:16:01.684446  480149 kubeadm.go:319] 
	I1124 04:16:01.684540  480149 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 04:16:01.684552  480149 kubeadm.go:319] 
	I1124 04:16:01.684634  480149 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 04:16:01.684643  480149 kubeadm.go:319] 
	I1124 04:16:01.684668  480149 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 04:16:01.684729  480149 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 04:16:01.684783  480149 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 04:16:01.684791  480149 kubeadm.go:319] 
	I1124 04:16:01.684846  480149 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 04:16:01.684853  480149 kubeadm.go:319] 
	I1124 04:16:01.684901  480149 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 04:16:01.684908  480149 kubeadm.go:319] 
	I1124 04:16:01.684961  480149 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 04:16:01.685039  480149 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 04:16:01.685111  480149 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 04:16:01.685119  480149 kubeadm.go:319] 
	I1124 04:16:01.685203  480149 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 04:16:01.685284  480149 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 04:16:01.685291  480149 kubeadm.go:319] 
	I1124 04:16:01.685375  480149 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rjod61.p0fcua0nil0mmfc1 \
	I1124 04:16:01.685481  480149 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 \
	I1124 04:16:01.685504  480149 kubeadm.go:319] 	--control-plane 
	I1124 04:16:01.685511  480149 kubeadm.go:319] 
	I1124 04:16:01.685597  480149 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 04:16:01.685604  480149 kubeadm.go:319] 
	I1124 04:16:01.685687  480149 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rjod61.p0fcua0nil0mmfc1 \
	I1124 04:16:01.685801  480149 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 
	I1124 04:16:01.690732  480149 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 04:16:01.691023  480149 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 04:16:01.691144  480149 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
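
The --discovery-token-ca-cert-hash in the join commands above is a sha256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate. A short Go sketch for recomputing it on the control-plane node (an illustration, not kubeadm code; the ca.crt path is kubeadm's default):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // kubeadm's default CA path
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum) // should match the hash in the join commands above
	}
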
	I1124 04:16:01.691172  480149 cni.go:84] Creating CNI manager for ""
	I1124 04:16:01.691184  480149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:16:01.694349  480149 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1124 04:16:01.347256  476511 node_ready.go:57] node "no-preload-600301" has "Ready":"False" status (will retry)
	W1124 04:16:03.347475  476511 node_ready.go:57] node "no-preload-600301" has "Ready":"False" status (will retry)
	I1124 04:16:01.697190  480149 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 04:16:01.701279  480149 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 04:16:01.701303  480149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 04:16:01.721469  480149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 04:16:02.298728  480149 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 04:16:02.298891  480149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:16:02.298984  480149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-520529 minikube.k8s.io/updated_at=2025_11_24T04_16_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=embed-certs-520529 minikube.k8s.io/primary=true
	I1124 04:16:02.542169  480149 ops.go:34] apiserver oom_adj: -16
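
The -16 reported here comes from the /proc read issued at 04:16:02, confirming the apiserver runs with an OOM-killer adjustment that makes it unlikely to be killed under memory pressure. The same check as a small Go program (a sketch mirroring the shell pipeline in the log; it assumes pgrep matches a single process, as it does here):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pid, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			log.Fatal(err)
		}
		adj, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // -16 in this run
	}
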
	I1124 04:16:02.542303  480149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:16:03.043281  480149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:16:03.543293  480149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:16:04.043243  480149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:16:04.542757  480149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:16:05.042617  480149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:16:05.542677  480149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:16:06.042856  480149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:16:06.193041  480149 kubeadm.go:1114] duration metric: took 3.894210209s to wait for elevateKubeSystemPrivileges
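
The half-second cadence of the `kubectl get sa default` runs above is the elevateKubeSystemPrivileges step waiting for the token controller to create the default service account before the cluster-admin binding created at 04:16:02 can take effect. A rough client-go equivalent of that wait (a sketch under that assumption, not minikube's implementation; the kubeconfig path is the one shown in the commands):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Poll every 500ms, matching the cadence of the retries in the log.
		for i := 0; i < 40; i++ {
			if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("default service account never appeared")
	}
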
	I1124 04:16:06.193077  480149 kubeadm.go:403] duration metric: took 24.254870023s to StartCluster
	I1124 04:16:06.193095  480149 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:16:06.193157  480149 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:16:06.194759  480149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:16:06.195099  480149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 04:16:06.195382  480149 config.go:182] Loaded profile config "embed-certs-520529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:16:06.195519  480149 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:16:06.195598  480149 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-520529"
	I1124 04:16:06.195618  480149 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-520529"
	I1124 04:16:06.195648  480149 host.go:66] Checking if "embed-certs-520529" exists ...
	I1124 04:16:06.196146  480149 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:16:06.196301  480149 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:16:06.196633  480149 addons.go:70] Setting default-storageclass=true in profile "embed-certs-520529"
	I1124 04:16:06.196658  480149 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-520529"
	I1124 04:16:06.197006  480149 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:16:06.199997  480149 out.go:179] * Verifying Kubernetes components...
	I1124 04:16:06.203888  480149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:16:06.242044  480149 addons.go:239] Setting addon default-storageclass=true in "embed-certs-520529"
	I1124 04:16:06.242090  480149 host.go:66] Checking if "embed-certs-520529" exists ...
	I1124 04:16:06.244327  480149 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:16:06.246505  480149 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:16:06.249656  480149 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:16:06.249679  480149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:16:06.249737  480149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:16:06.285525  480149 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:16:06.285547  480149 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:16:06.285610  480149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:16:06.293979  480149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:16:06.329237  480149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
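
The two docker inspect calls above recover the host port that Docker mapped to the container's 22/tcp before the SSH clients are opened (port 33436 in this run). The same lookup from Go, for illustration (it shells out with the exact template the log shows):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"embed-certs-520529").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33436 in this run
	}
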
	I1124 04:16:06.633387  480149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:16:06.654449  480149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:16:06.708186  480149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 04:16:06.708289  480149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:16:07.325477  480149 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
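
For reference, the sed pipeline above splices the following block into the CoreDNS Corefile, which is what makes host.minikube.internal resolve to the host gateway address reported here:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}
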
	I1124 04:16:07.327970  480149 node_ready.go:35] waiting up to 6m0s for node "embed-certs-520529" to be "Ready" ...
	I1124 04:16:07.328932  480149 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1124 04:16:05.347812  476511 node_ready.go:57] node "no-preload-600301" has "Ready":"False" status (will retry)
	I1124 04:16:07.347461  476511 node_ready.go:49] node "no-preload-600301" is "Ready"
	I1124 04:16:07.347488  476511 node_ready.go:38] duration metric: took 14.503533768s for node "no-preload-600301" to be "Ready" ...
	I1124 04:16:07.347502  476511 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:16:07.347564  476511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:16:07.376376  476511 api_server.go:72] duration metric: took 16.886412565s to wait for apiserver process to appear ...
	I1124 04:16:07.376401  476511 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:16:07.376425  476511 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 04:16:07.401915  476511 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 04:16:07.407173  476511 api_server.go:141] control plane version: v1.34.1
	I1124 04:16:07.407205  476511 api_server.go:131] duration metric: took 30.796163ms to wait for apiserver health ...
	I1124 04:16:07.407215  476511 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:16:07.416041  476511 system_pods.go:59] 8 kube-system pods found
	I1124 04:16:07.416088  476511 system_pods.go:61] "coredns-66bc5c9577-x6vx6" [f760eed4-9015-4d00-a224-e417f52d2938] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:16:07.416095  476511 system_pods.go:61] "etcd-no-preload-600301" [b23fe25a-20ab-47d6-9771-1505a8aaf295] Running
	I1124 04:16:07.416101  476511 system_pods.go:61] "kindnet-rqpt9" [a7f1c5ad-1407-46d8-9644-72a830d743e0] Running
	I1124 04:16:07.416106  476511 system_pods.go:61] "kube-apiserver-no-preload-600301" [1db2ceaf-3f52-4486-9474-99fbf501425d] Running
	I1124 04:16:07.416111  476511 system_pods.go:61] "kube-controller-manager-no-preload-600301" [5687b2b0-9a55-4872-b7c3-81779518bc55] Running
	I1124 04:16:07.416114  476511 system_pods.go:61] "kube-proxy-bzg2j" [ff549722-c13c-46b4-8ba0-9c34338e030d] Running
	I1124 04:16:07.416118  476511 system_pods.go:61] "kube-scheduler-no-preload-600301" [53ceff81-cfd1-43e5-9754-15d48f6b34db] Running
	I1124 04:16:07.416127  476511 system_pods.go:61] "storage-provisioner" [a6a27bc4-a6cb-46f9-98ca-f1ae25373869] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:16:07.416145  476511 system_pods.go:74] duration metric: took 8.915422ms to wait for pod list to return data ...
	I1124 04:16:07.416158  476511 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:16:07.422424  476511 default_sa.go:45] found service account: "default"
	I1124 04:16:07.422532  476511 default_sa.go:55] duration metric: took 6.366064ms for default service account to be created ...
	I1124 04:16:07.422559  476511 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 04:16:07.428552  476511 system_pods.go:86] 8 kube-system pods found
	I1124 04:16:07.428591  476511 system_pods.go:89] "coredns-66bc5c9577-x6vx6" [f760eed4-9015-4d00-a224-e417f52d2938] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:16:07.428598  476511 system_pods.go:89] "etcd-no-preload-600301" [b23fe25a-20ab-47d6-9771-1505a8aaf295] Running
	I1124 04:16:07.428606  476511 system_pods.go:89] "kindnet-rqpt9" [a7f1c5ad-1407-46d8-9644-72a830d743e0] Running
	I1124 04:16:07.428613  476511 system_pods.go:89] "kube-apiserver-no-preload-600301" [1db2ceaf-3f52-4486-9474-99fbf501425d] Running
	I1124 04:16:07.428623  476511 system_pods.go:89] "kube-controller-manager-no-preload-600301" [5687b2b0-9a55-4872-b7c3-81779518bc55] Running
	I1124 04:16:07.428628  476511 system_pods.go:89] "kube-proxy-bzg2j" [ff549722-c13c-46b4-8ba0-9c34338e030d] Running
	I1124 04:16:07.428639  476511 system_pods.go:89] "kube-scheduler-no-preload-600301" [53ceff81-cfd1-43e5-9754-15d48f6b34db] Running
	I1124 04:16:07.428645  476511 system_pods.go:89] "storage-provisioner" [a6a27bc4-a6cb-46f9-98ca-f1ae25373869] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:16:07.428666  476511 retry.go:31] will retry after 282.508109ms: missing components: kube-dns
	I1124 04:16:07.717052  476511 system_pods.go:86] 8 kube-system pods found
	I1124 04:16:07.717092  476511 system_pods.go:89] "coredns-66bc5c9577-x6vx6" [f760eed4-9015-4d00-a224-e417f52d2938] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:16:07.717100  476511 system_pods.go:89] "etcd-no-preload-600301" [b23fe25a-20ab-47d6-9771-1505a8aaf295] Running
	I1124 04:16:07.717106  476511 system_pods.go:89] "kindnet-rqpt9" [a7f1c5ad-1407-46d8-9644-72a830d743e0] Running
	I1124 04:16:07.717111  476511 system_pods.go:89] "kube-apiserver-no-preload-600301" [1db2ceaf-3f52-4486-9474-99fbf501425d] Running
	I1124 04:16:07.717116  476511 system_pods.go:89] "kube-controller-manager-no-preload-600301" [5687b2b0-9a55-4872-b7c3-81779518bc55] Running
	I1124 04:16:07.717120  476511 system_pods.go:89] "kube-proxy-bzg2j" [ff549722-c13c-46b4-8ba0-9c34338e030d] Running
	I1124 04:16:07.717124  476511 system_pods.go:89] "kube-scheduler-no-preload-600301" [53ceff81-cfd1-43e5-9754-15d48f6b34db] Running
	I1124 04:16:07.717129  476511 system_pods.go:89] "storage-provisioner" [a6a27bc4-a6cb-46f9-98ca-f1ae25373869] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:16:07.717144  476511 retry.go:31] will retry after 273.187193ms: missing components: kube-dns
	I1124 04:16:07.994429  476511 system_pods.go:86] 8 kube-system pods found
	I1124 04:16:07.994491  476511 system_pods.go:89] "coredns-66bc5c9577-x6vx6" [f760eed4-9015-4d00-a224-e417f52d2938] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:16:07.994499  476511 system_pods.go:89] "etcd-no-preload-600301" [b23fe25a-20ab-47d6-9771-1505a8aaf295] Running
	I1124 04:16:07.994512  476511 system_pods.go:89] "kindnet-rqpt9" [a7f1c5ad-1407-46d8-9644-72a830d743e0] Running
	I1124 04:16:07.994547  476511 system_pods.go:89] "kube-apiserver-no-preload-600301" [1db2ceaf-3f52-4486-9474-99fbf501425d] Running
	I1124 04:16:07.994562  476511 system_pods.go:89] "kube-controller-manager-no-preload-600301" [5687b2b0-9a55-4872-b7c3-81779518bc55] Running
	I1124 04:16:07.994567  476511 system_pods.go:89] "kube-proxy-bzg2j" [ff549722-c13c-46b4-8ba0-9c34338e030d] Running
	I1124 04:16:07.994571  476511 system_pods.go:89] "kube-scheduler-no-preload-600301" [53ceff81-cfd1-43e5-9754-15d48f6b34db] Running
	I1124 04:16:07.994576  476511 system_pods.go:89] "storage-provisioner" [a6a27bc4-a6cb-46f9-98ca-f1ae25373869] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:16:07.994599  476511 retry.go:31] will retry after 388.475788ms: missing components: kube-dns
	I1124 04:16:08.403880  476511 system_pods.go:86] 8 kube-system pods found
	I1124 04:16:08.403915  476511 system_pods.go:89] "coredns-66bc5c9577-x6vx6" [f760eed4-9015-4d00-a224-e417f52d2938] Running
	I1124 04:16:08.403923  476511 system_pods.go:89] "etcd-no-preload-600301" [b23fe25a-20ab-47d6-9771-1505a8aaf295] Running
	I1124 04:16:08.403927  476511 system_pods.go:89] "kindnet-rqpt9" [a7f1c5ad-1407-46d8-9644-72a830d743e0] Running
	I1124 04:16:08.403932  476511 system_pods.go:89] "kube-apiserver-no-preload-600301" [1db2ceaf-3f52-4486-9474-99fbf501425d] Running
	I1124 04:16:08.403937  476511 system_pods.go:89] "kube-controller-manager-no-preload-600301" [5687b2b0-9a55-4872-b7c3-81779518bc55] Running
	I1124 04:16:08.403942  476511 system_pods.go:89] "kube-proxy-bzg2j" [ff549722-c13c-46b4-8ba0-9c34338e030d] Running
	I1124 04:16:08.403946  476511 system_pods.go:89] "kube-scheduler-no-preload-600301" [53ceff81-cfd1-43e5-9754-15d48f6b34db] Running
	I1124 04:16:08.403953  476511 system_pods.go:89] "storage-provisioner" [a6a27bc4-a6cb-46f9-98ca-f1ae25373869] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:16:08.403961  476511 system_pods.go:126] duration metric: took 981.383307ms to wait for k8s-apps to be running ...
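
The retry loop above passes only once every kube-system pod reports phase Running; the earlier "missing components: kube-dns" messages come from coredns still being Pending. A compact client-go sketch of that check (illustrative only; the kubeconfig location is an assumption):

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed ~/.kube/config
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("not running yet: %s (phase %s)\n", p.Name, p.Status.Phase)
			}
		}
	}
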
	I1124 04:16:08.403975  476511 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 04:16:08.404039  476511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:16:08.427967  476511 system_svc.go:56] duration metric: took 23.981396ms WaitForService to wait for kubelet
	I1124 04:16:08.427999  476511 kubeadm.go:587] duration metric: took 17.938044005s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:16:08.428019  476511 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:16:08.435879  476511 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:16:08.435915  476511 node_conditions.go:123] node cpu capacity is 2
	I1124 04:16:08.435928  476511 node_conditions.go:105] duration metric: took 7.904052ms to run NodePressure ...
	I1124 04:16:08.435943  476511 start.go:242] waiting for startup goroutines ...
	I1124 04:16:08.435950  476511 start.go:247] waiting for cluster config update ...
	I1124 04:16:08.435962  476511 start.go:256] writing updated cluster config ...
	I1124 04:16:08.436261  476511 ssh_runner.go:195] Run: rm -f paused
	I1124 04:16:08.443405  476511 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:16:08.449781  476511 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x6vx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:08.455851  476511 pod_ready.go:94] pod "coredns-66bc5c9577-x6vx6" is "Ready"
	I1124 04:16:08.455882  476511 pod_ready.go:86] duration metric: took 6.07445ms for pod "coredns-66bc5c9577-x6vx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:08.458602  476511 pod_ready.go:83] waiting for pod "etcd-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:08.465136  476511 pod_ready.go:94] pod "etcd-no-preload-600301" is "Ready"
	I1124 04:16:08.465164  476511 pod_ready.go:86] duration metric: took 6.533409ms for pod "etcd-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:08.468021  476511 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:08.473460  476511 pod_ready.go:94] pod "kube-apiserver-no-preload-600301" is "Ready"
	I1124 04:16:08.473490  476511 pod_ready.go:86] duration metric: took 5.44272ms for pod "kube-apiserver-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:08.476089  476511 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:08.848339  476511 pod_ready.go:94] pod "kube-controller-manager-no-preload-600301" is "Ready"
	I1124 04:16:08.848370  476511 pod_ready.go:86] duration metric: took 372.253447ms for pod "kube-controller-manager-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:09.047342  476511 pod_ready.go:83] waiting for pod "kube-proxy-bzg2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:07.331870  480149 addons.go:530] duration metric: took 1.136342732s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 04:16:07.830810  480149 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-520529" context rescaled to 1 replicas
	W1124 04:16:09.331758  480149 node_ready.go:57] node "embed-certs-520529" has "Ready":"False" status (will retry)
	I1124 04:16:09.447802  476511 pod_ready.go:94] pod "kube-proxy-bzg2j" is "Ready"
	I1124 04:16:09.447834  476511 pod_ready.go:86] duration metric: took 400.460496ms for pod "kube-proxy-bzg2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:09.648927  476511 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:10.047830  476511 pod_ready.go:94] pod "kube-scheduler-no-preload-600301" is "Ready"
	I1124 04:16:10.047859  476511 pod_ready.go:86] duration metric: took 398.905685ms for pod "kube-scheduler-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:10.047896  476511 pod_ready.go:40] duration metric: took 1.604433623s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:16:10.103224  476511 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 04:16:10.106294  476511 out.go:179] * Done! kubectl is now configured to use "no-preload-600301" cluster and "default" namespace by default
	W1124 04:16:11.832103  480149 node_ready.go:57] node "embed-certs-520529" has "Ready":"False" status (will retry)
	W1124 04:16:14.331714  480149 node_ready.go:57] node "embed-certs-520529" has "Ready":"False" status (will retry)
	W1124 04:16:16.831366  480149 node_ready.go:57] node "embed-certs-520529" has "Ready":"False" status (will retry)
	W1124 04:16:18.831908  480149 node_ready.go:57] node "embed-certs-520529" has "Ready":"False" status (will retry)
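
The node_ready retries for both clusters poll the node object until its NodeReady condition turns True, which the kubelet only reports once the CNI is functional. A client-go sketch of the underlying check (illustrative only; the kubeconfig location is an assumption):

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed ~/.kube/config
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-520529", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("Ready=%s (reason %s)\n", c.Status, c.Reason)
			}
		}
	}
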
	
	
	==> CRI-O <==
	Nov 24 04:16:07 no-preload-600301 crio[836]: time="2025-11-24T04:16:07.541972364Z" level=info msg="Created container d34429fccf66b7d49abdb8b67b5c78e555de83ad132f8da11df4d96e6986dcab: kube-system/coredns-66bc5c9577-x6vx6/coredns" id=6c8237a6-3770-4644-bd3d-fb795260c77a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:16:07 no-preload-600301 crio[836]: time="2025-11-24T04:16:07.543460786Z" level=info msg="Starting container: d34429fccf66b7d49abdb8b67b5c78e555de83ad132f8da11df4d96e6986dcab" id=f41359d8-6272-4c67-8358-0a2a2a6120ac name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:16:07 no-preload-600301 crio[836]: time="2025-11-24T04:16:07.545548123Z" level=info msg="Started container" PID=2471 containerID=d34429fccf66b7d49abdb8b67b5c78e555de83ad132f8da11df4d96e6986dcab description=kube-system/coredns-66bc5c9577-x6vx6/coredns id=f41359d8-6272-4c67-8358-0a2a2a6120ac name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d5aee1359eb58f25feefa9448f97343d7f4d5e240d830aac27b169a2d920795
	Nov 24 04:16:10 no-preload-600301 crio[836]: time="2025-11-24T04:16:10.631627847Z" level=info msg="Running pod sandbox: default/busybox/POD" id=aa91ae9c-f491-4176-987e-be3f351b8e11 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:16:10 no-preload-600301 crio[836]: time="2025-11-24T04:16:10.631713788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:16:10 no-preload-600301 crio[836]: time="2025-11-24T04:16:10.636727006Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0d92f9b1d898182de51deda85269c44b07a456dd56e311cceb87236f366a24e9 UID:2e198013-8c34-4d24-aef2-be30a7043011 NetNS:/var/run/netns/0e9917b7-b81b-4546-8e06-0a4a735f2f67 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000cacc8}] Aliases:map[]}"
	Nov 24 04:16:10 no-preload-600301 crio[836]: time="2025-11-24T04:16:10.636762797Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 04:16:10 no-preload-600301 crio[836]: time="2025-11-24T04:16:10.652618804Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0d92f9b1d898182de51deda85269c44b07a456dd56e311cceb87236f366a24e9 UID:2e198013-8c34-4d24-aef2-be30a7043011 NetNS:/var/run/netns/0e9917b7-b81b-4546-8e06-0a4a735f2f67 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000cacc8}] Aliases:map[]}"
	Nov 24 04:16:10 no-preload-600301 crio[836]: time="2025-11-24T04:16:10.652765464Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 04:16:10 no-preload-600301 crio[836]: time="2025-11-24T04:16:10.656667398Z" level=info msg="Ran pod sandbox 0d92f9b1d898182de51deda85269c44b07a456dd56e311cceb87236f366a24e9 with infra container: default/busybox/POD" id=aa91ae9c-f491-4176-987e-be3f351b8e11 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:16:10 no-preload-600301 crio[836]: time="2025-11-24T04:16:10.658778621Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2e3296dc-87e7-4b82-80f3-1d5f15855cb5 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:16:10 no-preload-600301 crio[836]: time="2025-11-24T04:16:10.659088262Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2e3296dc-87e7-4b82-80f3-1d5f15855cb5 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:16:10 no-preload-600301 crio[836]: time="2025-11-24T04:16:10.659245129Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2e3296dc-87e7-4b82-80f3-1d5f15855cb5 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:16:10 no-preload-600301 crio[836]: time="2025-11-24T04:16:10.662836478Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e67b1c36-0b08-4f99-a936-991457de4396 name=/runtime.v1.ImageService/PullImage
	Nov 24 04:16:10 no-preload-600301 crio[836]: time="2025-11-24T04:16:10.666987104Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 04:16:12 no-preload-600301 crio[836]: time="2025-11-24T04:16:12.726990166Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=e67b1c36-0b08-4f99-a936-991457de4396 name=/runtime.v1.ImageService/PullImage
	Nov 24 04:16:12 no-preload-600301 crio[836]: time="2025-11-24T04:16:12.727904935Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8e1742c1-6396-450d-bfa9-50b007aaf25d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:16:12 no-preload-600301 crio[836]: time="2025-11-24T04:16:12.729234421Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=61ec4e0d-9e2e-47ff-95f5-9e60a708c47f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:16:12 no-preload-600301 crio[836]: time="2025-11-24T04:16:12.735860016Z" level=info msg="Creating container: default/busybox/busybox" id=61be85d8-6876-4af0-90c6-df946404db96 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:16:12 no-preload-600301 crio[836]: time="2025-11-24T04:16:12.735974044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:16:12 no-preload-600301 crio[836]: time="2025-11-24T04:16:12.740811267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:16:12 no-preload-600301 crio[836]: time="2025-11-24T04:16:12.741287244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:16:12 no-preload-600301 crio[836]: time="2025-11-24T04:16:12.756532938Z" level=info msg="Created container f61e36758a19f96ff88de74a2fe1488228d2043138e77db3c67ff780c4544b4e: default/busybox/busybox" id=61be85d8-6876-4af0-90c6-df946404db96 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:16:12 no-preload-600301 crio[836]: time="2025-11-24T04:16:12.75759984Z" level=info msg="Starting container: f61e36758a19f96ff88de74a2fe1488228d2043138e77db3c67ff780c4544b4e" id=d7363c7a-2765-4bfb-a8d4-41d38a904453 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:16:12 no-preload-600301 crio[836]: time="2025-11-24T04:16:12.762187377Z" level=info msg="Started container" PID=2529 containerID=f61e36758a19f96ff88de74a2fe1488228d2043138e77db3c67ff780c4544b4e description=default/busybox/busybox id=d7363c7a-2765-4bfb-a8d4-41d38a904453 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d92f9b1d898182de51deda85269c44b07a456dd56e311cceb87236f366a24e9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f61e36758a19f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   0d92f9b1d8981       busybox                                     default
	d34429fccf66b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   6d5aee1359eb5       coredns-66bc5c9577-x6vx6                    kube-system
	a843502085449       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   587ebded92fdf       storage-provisioner                         kube-system
	0b91554561bfa       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   a7376ce86e332       kindnet-rqpt9                               kube-system
	c468f25e3b02e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      30 seconds ago      Running             kube-proxy                0                   c92fbdf067d40       kube-proxy-bzg2j                            kube-system
	8efa0728778a6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      46 seconds ago      Running             kube-apiserver            0                   6f93d2251d480       kube-apiserver-no-preload-600301            kube-system
	83de02210d60d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      46 seconds ago      Running             kube-controller-manager   0                   7471ba708a020       kube-controller-manager-no-preload-600301   kube-system
	cb49432ba2bf2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      46 seconds ago      Running             etcd                      0                   c2896fafa676c       etcd-no-preload-600301                      kube-system
	ccf26d9d63f02       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      46 seconds ago      Running             kube-scheduler            0                   044d30aaa1469       kube-scheduler-no-preload-600301            kube-system
	
	
	==> coredns [d34429fccf66b7d49abdb8b67b5c78e555de83ad132f8da11df4d96e6986dcab] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34509 - 60859 "HINFO IN 3640560071044341542.1254434121878883602. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037170965s
	
	
	==> describe nodes <==
	Name:               no-preload-600301
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-600301
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=no-preload-600301
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_15_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:15:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-600301
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:16:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:16:17 +0000   Mon, 24 Nov 2025 04:15:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:16:17 +0000   Mon, 24 Nov 2025 04:15:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:16:17 +0000   Mon, 24 Nov 2025 04:15:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:16:17 +0000   Mon, 24 Nov 2025 04:16:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-600301
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                d1ffab9e-c111-4d9d-8ac8-cb5bfd0ed15c
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-x6vx6                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     32s
	  kube-system                 etcd-no-preload-600301                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-rqpt9                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-no-preload-600301             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-no-preload-600301    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-bzg2j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-no-preload-600301             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 30s                kube-proxy       
	  Warning  CgroupV1                 47s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node no-preload-600301 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node no-preload-600301 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node no-preload-600301 status is now: NodeHasSufficientPID
	  Normal   Starting                 37s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node no-preload-600301 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node no-preload-600301 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s                kubelet          Node no-preload-600301 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           33s                node-controller  Node no-preload-600301 event: Registered Node no-preload-600301 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-600301 status is now: NodeReady
	
	
	==> dmesg <==
	[ +32.185990] overlayfs: idmapped layers are currently not supported
	[Nov24 03:52] overlayfs: idmapped layers are currently not supported
	[Nov24 03:54] overlayfs: idmapped layers are currently not supported
	[Nov24 03:55] overlayfs: idmapped layers are currently not supported
	[Nov24 03:56] overlayfs: idmapped layers are currently not supported
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	[Nov24 04:15] overlayfs: idmapped layers are currently not supported
	[ +47.476343] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cb49432ba2bf236da6ee59a931af2a05d321ac62f2930af001cacf95fed375cf] <==
	{"level":"warn","ts":"2025-11-24T04:15:38.883518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:38.939210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:38.986039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.014687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.059040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.075659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.113838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.139195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.158252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.199130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.235810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.259597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.281604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.308127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.337239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.360430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.384604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.395363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.418047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.450300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.479706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.513331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.555149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.581975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:39.717361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46908","server-name":"","error":"EOF"}
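
These repeated "rejected connection ... error: EOF" warnings during startup are typically harmless: each one plausibly corresponds to a client opening a TCP connection to etcd's client port and closing it before completing a TLS handshake, which is how liveness-style port probes (such as kubeadm's wait loop) test that the endpoint is accepting connections. A sketch of such a probe (an assumption about the probes' mechanics, not etcd or kubeadm code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial etcd's client port and hang up without starting a TLS
		// handshake; server-side, a connection like this surfaces as the
		// "rejected connection ... error: EOF" warning seen above.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", time.Second)
		if err != nil {
			fmt.Println("port not accepting connections yet:", err)
			return
		}
		conn.Close()
		fmt.Println("etcd client port is accepting connections")
	}
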
	
	
	==> kernel <==
	 04:16:22 up  2:58,  0 user,  load average: 3.44, 3.27, 2.77
	Linux no-preload-600301 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b91554561bfa92d65df235628c07c386f469bfd16e5de5d29198efb3b9db4b4] <==
	I1124 04:15:56.318634       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:15:56.318867       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 04:15:56.318999       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:15:56.319012       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:15:56.319021       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:15:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:15:56.520171       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:15:56.520207       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:15:56.520216       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:15:56.540436       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 04:15:56.720648       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:15:56.720726       1 metrics.go:72] Registering metrics
	I1124 04:15:56.720812       1 controller.go:711] "Syncing nftables rules"
	I1124 04:16:06.528823       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:16:06.528875       1 main.go:301] handling current node
	I1124 04:16:16.520700       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:16:16.520739       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8efa0728778a618a990435f63220fd53c455e8180ee767ae032f763d9664f0e8] <==
	I1124 04:15:41.479544       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 04:15:41.479634       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 04:15:41.488947       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 04:15:41.530913       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:15:41.642703       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:15:41.652977       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 04:15:41.724733       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:15:41.728122       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 04:15:41.892641       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 04:15:41.911767       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 04:15:41.926832       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:15:43.543721       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:15:43.635770       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:15:43.815301       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 04:15:43.831215       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 04:15:43.832740       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 04:15:43.850310       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 04:15:44.317676       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:15:45.891828       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 04:15:45.916366       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 04:15:45.955247       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 04:15:50.168128       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 04:15:50.268979       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:15:50.288781       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 04:15:50.306298       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [83de02210d60ddd74337b6043785cb66bf4047e9937c323252460f8851c388d3] <==
	I1124 04:15:49.467468       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 04:15:49.467521       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 04:15:49.468812       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 04:15:49.477086       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 04:15:49.484450       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:15:49.484477       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:15:49.484485       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:15:49.486947       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 04:15:49.489348       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 04:15:49.505581       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 04:15:49.512926       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 04:15:49.513025       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 04:15:49.513104       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-600301"
	I1124 04:15:49.513153       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 04:15:49.513192       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 04:15:49.513221       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 04:15:49.513327       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 04:15:49.514212       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 04:15:49.515126       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 04:15:49.515480       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 04:15:49.515777       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 04:15:49.515863       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 04:15:49.524432       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:15:49.542219       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:16:09.516248       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c468f25e3b02e416398621587d8bb8374ed30e5bff7e507e1b4444f2b47489cc] <==
	I1124 04:15:51.411577       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:15:51.508106       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:15:51.708944       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:15:51.708982       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 04:15:51.709073       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:15:51.761702       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:15:51.761751       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:15:51.772560       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:15:51.772885       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:15:51.772901       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:15:51.774314       1 config.go:200] "Starting service config controller"
	I1124 04:15:51.774324       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:15:51.774390       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:15:51.774396       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:15:51.774410       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:15:51.774414       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:15:51.775070       1 config.go:309] "Starting node config controller"
	I1124 04:15:51.775079       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:15:51.775084       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:15:51.875402       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:15:51.875411       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:15:51.875444       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ccf26d9d63f02947ff0a9e66fd03b9870c6d4041c4f6dc509be0c6d991c72efe] <==
	I1124 04:15:40.934408       1 serving.go:386] Generated self-signed cert in-memory
	I1124 04:15:44.994208       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 04:15:44.998622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:15:45.008255       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 04:15:45.009878       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:15:45.020816       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:15:45.009823       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 04:15:45.022293       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 04:15:45.009896       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:15:45.056792       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:15:45.009916       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 04:15:45.166354       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:15:45.223956       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 04:15:45.224096       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:15:47 no-preload-600301 kubelet[1981]: I1124 04:15:47.354139    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-600301" podStartSLOduration=1.3541119400000001 podStartE2EDuration="1.35411194s" podCreationTimestamp="2025-11-24 04:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:15:47.335109175 +0000 UTC m=+1.549968007" watchObservedRunningTime="2025-11-24 04:15:47.35411194 +0000 UTC m=+1.568970756"
	Nov 24 04:15:49 no-preload-600301 kubelet[1981]: I1124 04:15:49.457400    1981 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 04:15:49 no-preload-600301 kubelet[1981]: I1124 04:15:49.458616    1981 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 04:15:50 no-preload-600301 kubelet[1981]: I1124 04:15:50.472055    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff549722-c13c-46b4-8ba0-9c34338e030d-kube-proxy\") pod \"kube-proxy-bzg2j\" (UID: \"ff549722-c13c-46b4-8ba0-9c34338e030d\") " pod="kube-system/kube-proxy-bzg2j"
	Nov 24 04:15:50 no-preload-600301 kubelet[1981]: I1124 04:15:50.472572    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff549722-c13c-46b4-8ba0-9c34338e030d-lib-modules\") pod \"kube-proxy-bzg2j\" (UID: \"ff549722-c13c-46b4-8ba0-9c34338e030d\") " pod="kube-system/kube-proxy-bzg2j"
	Nov 24 04:15:50 no-preload-600301 kubelet[1981]: I1124 04:15:50.472606    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kttfm\" (UniqueName: \"kubernetes.io/projected/ff549722-c13c-46b4-8ba0-9c34338e030d-kube-api-access-kttfm\") pod \"kube-proxy-bzg2j\" (UID: \"ff549722-c13c-46b4-8ba0-9c34338e030d\") " pod="kube-system/kube-proxy-bzg2j"
	Nov 24 04:15:50 no-preload-600301 kubelet[1981]: I1124 04:15:50.472628    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a7f1c5ad-1407-46d8-9644-72a830d743e0-cni-cfg\") pod \"kindnet-rqpt9\" (UID: \"a7f1c5ad-1407-46d8-9644-72a830d743e0\") " pod="kube-system/kindnet-rqpt9"
	Nov 24 04:15:50 no-preload-600301 kubelet[1981]: I1124 04:15:50.472647    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7f1c5ad-1407-46d8-9644-72a830d743e0-xtables-lock\") pod \"kindnet-rqpt9\" (UID: \"a7f1c5ad-1407-46d8-9644-72a830d743e0\") " pod="kube-system/kindnet-rqpt9"
	Nov 24 04:15:50 no-preload-600301 kubelet[1981]: I1124 04:15:50.472664    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94khb\" (UniqueName: \"kubernetes.io/projected/a7f1c5ad-1407-46d8-9644-72a830d743e0-kube-api-access-94khb\") pod \"kindnet-rqpt9\" (UID: \"a7f1c5ad-1407-46d8-9644-72a830d743e0\") " pod="kube-system/kindnet-rqpt9"
	Nov 24 04:15:50 no-preload-600301 kubelet[1981]: I1124 04:15:50.472687    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff549722-c13c-46b4-8ba0-9c34338e030d-xtables-lock\") pod \"kube-proxy-bzg2j\" (UID: \"ff549722-c13c-46b4-8ba0-9c34338e030d\") " pod="kube-system/kube-proxy-bzg2j"
	Nov 24 04:15:50 no-preload-600301 kubelet[1981]: I1124 04:15:50.472711    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7f1c5ad-1407-46d8-9644-72a830d743e0-lib-modules\") pod \"kindnet-rqpt9\" (UID: \"a7f1c5ad-1407-46d8-9644-72a830d743e0\") " pod="kube-system/kindnet-rqpt9"
	Nov 24 04:15:50 no-preload-600301 kubelet[1981]: I1124 04:15:50.782282    1981 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 04:15:51 no-preload-600301 kubelet[1981]: W1124 04:15:51.039383    1981 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/crio-c92fbdf067d40bdcbeff0c51534d30fd5853e1f648b019ebf458843f03801d63 WatchSource:0}: Error finding container c92fbdf067d40bdcbeff0c51534d30fd5853e1f648b019ebf458843f03801d63: Status 404 returned error can't find the container with id c92fbdf067d40bdcbeff0c51534d30fd5853e1f648b019ebf458843f03801d63
	Nov 24 04:15:52 no-preload-600301 kubelet[1981]: I1124 04:15:52.776999    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bzg2j" podStartSLOduration=2.776978684 podStartE2EDuration="2.776978684s" podCreationTimestamp="2025-11-24 04:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:15:52.345075764 +0000 UTC m=+6.559934580" watchObservedRunningTime="2025-11-24 04:15:52.776978684 +0000 UTC m=+6.991837508"
	Nov 24 04:16:07 no-preload-600301 kubelet[1981]: I1124 04:16:07.096568    1981 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 04:16:07 no-preload-600301 kubelet[1981]: I1124 04:16:07.132286    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rqpt9" podStartSLOduration=12.136651158 podStartE2EDuration="17.132243922s" podCreationTimestamp="2025-11-24 04:15:50 +0000 UTC" firstStartedPulling="2025-11-24 04:15:51.138331843 +0000 UTC m=+5.353190659" lastFinishedPulling="2025-11-24 04:15:56.133924599 +0000 UTC m=+10.348783423" observedRunningTime="2025-11-24 04:15:56.33933235 +0000 UTC m=+10.554191198" watchObservedRunningTime="2025-11-24 04:16:07.132243922 +0000 UTC m=+21.347102746"
	Nov 24 04:16:07 no-preload-600301 kubelet[1981]: I1124 04:16:07.255315    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f760eed4-9015-4d00-a224-e417f52d2938-config-volume\") pod \"coredns-66bc5c9577-x6vx6\" (UID: \"f760eed4-9015-4d00-a224-e417f52d2938\") " pod="kube-system/coredns-66bc5c9577-x6vx6"
	Nov 24 04:16:07 no-preload-600301 kubelet[1981]: I1124 04:16:07.255437    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lttt4\" (UniqueName: \"kubernetes.io/projected/f760eed4-9015-4d00-a224-e417f52d2938-kube-api-access-lttt4\") pod \"coredns-66bc5c9577-x6vx6\" (UID: \"f760eed4-9015-4d00-a224-e417f52d2938\") " pod="kube-system/coredns-66bc5c9577-x6vx6"
	Nov 24 04:16:07 no-preload-600301 kubelet[1981]: I1124 04:16:07.255463    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a6a27bc4-a6cb-46f9-98ca-f1ae25373869-tmp\") pod \"storage-provisioner\" (UID: \"a6a27bc4-a6cb-46f9-98ca-f1ae25373869\") " pod="kube-system/storage-provisioner"
	Nov 24 04:16:07 no-preload-600301 kubelet[1981]: I1124 04:16:07.255520    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wlmz\" (UniqueName: \"kubernetes.io/projected/a6a27bc4-a6cb-46f9-98ca-f1ae25373869-kube-api-access-7wlmz\") pod \"storage-provisioner\" (UID: \"a6a27bc4-a6cb-46f9-98ca-f1ae25373869\") " pod="kube-system/storage-provisioner"
	Nov 24 04:16:07 no-preload-600301 kubelet[1981]: W1124 04:16:07.461118    1981 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/crio-587ebded92fdfdbc531ed2ebff1fdde4afc409bf28a2b7e5f00ce8b4531f4bc8 WatchSource:0}: Error finding container 587ebded92fdfdbc531ed2ebff1fdde4afc409bf28a2b7e5f00ce8b4531f4bc8: Status 404 returned error can't find the container with id 587ebded92fdfdbc531ed2ebff1fdde4afc409bf28a2b7e5f00ce8b4531f4bc8
	Nov 24 04:16:07 no-preload-600301 kubelet[1981]: W1124 04:16:07.502384    1981 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/crio-6d5aee1359eb58f25feefa9448f97343d7f4d5e240d830aac27b169a2d920795 WatchSource:0}: Error finding container 6d5aee1359eb58f25feefa9448f97343d7f4d5e240d830aac27b169a2d920795: Status 404 returned error can't find the container with id 6d5aee1359eb58f25feefa9448f97343d7f4d5e240d830aac27b169a2d920795
	Nov 24 04:16:08 no-preload-600301 kubelet[1981]: I1124 04:16:08.382116    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x6vx6" podStartSLOduration=18.382095055 podStartE2EDuration="18.382095055s" podCreationTimestamp="2025-11-24 04:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:16:08.361418867 +0000 UTC m=+22.576277699" watchObservedRunningTime="2025-11-24 04:16:08.382095055 +0000 UTC m=+22.596953871"
	Nov 24 04:16:10 no-preload-600301 kubelet[1981]: I1124 04:16:10.322497    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=18.322449454 podStartE2EDuration="18.322449454s" podCreationTimestamp="2025-11-24 04:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:16:08.426097369 +0000 UTC m=+22.640956193" watchObservedRunningTime="2025-11-24 04:16:10.322449454 +0000 UTC m=+24.537308278"
	Nov 24 04:16:10 no-preload-600301 kubelet[1981]: I1124 04:16:10.375102    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpvz9\" (UniqueName: \"kubernetes.io/projected/2e198013-8c34-4d24-aef2-be30a7043011-kube-api-access-fpvz9\") pod \"busybox\" (UID: \"2e198013-8c34-4d24-aef2-be30a7043011\") " pod="default/busybox"
	
	
	==> storage-provisioner [a843502085449d0904f0b535534e5429480b23a709c7028be2fb3dc80e7c8252] <==
	I1124 04:16:07.549491       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 04:16:07.576621       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 04:16:07.576667       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 04:16:07.579614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:07.588154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:16:07.588369       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 04:16:07.590894       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"158a062c-c5ad-4735-ae08-e89f4d9cb5f4", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-600301_7fe26fb6-c8e5-4805-aa40-6ed7f2df9d07 became leader
	I1124 04:16:07.591074       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-600301_7fe26fb6-c8e5-4805-aa40-6ed7f2df9d07!
	W1124 04:16:07.635129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:07.640194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:16:07.692384       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-600301_7fe26fb6-c8e5-4805-aa40-6ed7f2df9d07!
	W1124 04:16:09.643696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:09.649182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:11.652334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:11.656990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:13.659831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:13.664410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:15.667990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:15.676472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:17.680775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:17.687507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:19.690517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:19.694816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:21.698040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:21.704332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-600301 -n no-preload-600301
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-600301 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.55s)
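The storage-provisioner warnings at the tail of the log above are emitted because the provisioner still takes its leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), which is deprecated since v1.33 in favor of discovery.k8s.io/v1 EndpointSlice. A quick way to look at both object kinds by hand, sketched here under the assumption that the no-preload-600301 context is still reachable:

    $ kubectl --context no-preload-600301 -n kube-system get endpoints k8s.io-minikube-hostpath
    $ kubectl --context no-preload-600301 -n kube-system get endpointslices

The warnings appear to be benign until the v1 Endpoints API is actually removed, but they repeat on every lease renewal, which is why they dominate the end of the provisioner log.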

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-520529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-520529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (683.039615ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:17:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-520529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
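The exit status 11 comes from minikube's paused-state probe: before enabling an addon it lists containers with "sudo runc list -f json" inside the node, and that probe fails here because /run/runc does not exist. A minimal manual reproduction of the same check, assuming the embed-certs-520529 node is still up:

    $ minikube -p embed-certs-520529 ssh -- sudo runc list -f json
    $ minikube -p embed-certs-520529 ssh -- ls /run/runc

On this configuration (crio runtime) the runc state directory is evidently absent, so the probe reports failure even though the cluster itself is running.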
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-520529 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-520529 describe deploy/metrics-server -n kube-system: exit status 1 (117.559828ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-520529 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
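For reference, the image override the test asserts on can be checked directly once the deployment exists; a sketch of that check (the jsonpath query is illustrative, not part of the test suite):

    $ kubectl --context embed-certs-520529 -n kube-system get deploy metrics-server \
        -o jsonpath='{.spec.template.spec.containers[*].image}'

A passing run would print an image containing fake.domain/registry.k8s.io/echoserver:1.4; here the query would hit the same NotFound error, since the addon enable aborted before the deployment was created.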
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-520529
helpers_test.go:243: (dbg) docker inspect embed-certs-520529:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb",
	        "Created": "2025-11-24T04:15:31.362300869Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 480771,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:15:31.433219045Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/hosts",
	        "LogPath": "/var/lib/docker/containers/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb-json.log",
	        "Name": "/embed-certs-520529",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-520529:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-520529",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb",
	                "LowerDir": "/var/lib/docker/overlay2/802b4ddd893465d41da7d4aef59a4908de4bca3ef59f3154a91d2e1417b23762-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/802b4ddd893465d41da7d4aef59a4908de4bca3ef59f3154a91d2e1417b23762/merged",
	                "UpperDir": "/var/lib/docker/overlay2/802b4ddd893465d41da7d4aef59a4908de4bca3ef59f3154a91d2e1417b23762/diff",
	                "WorkDir": "/var/lib/docker/overlay2/802b4ddd893465d41da7d4aef59a4908de4bca3ef59f3154a91d2e1417b23762/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-520529",
	                "Source": "/var/lib/docker/volumes/embed-certs-520529/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-520529",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-520529",
	                "name.minikube.sigs.k8s.io": "embed-certs-520529",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd7d9d84c8d2be96067458121e4866f99f566565ba90ce4fd1c8f30f8f6c1947",
	            "SandboxKey": "/var/run/docker/netns/cd7d9d84c8d2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-520529": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:90:93:58:e7:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e3e6fa2232739e2881841760b0f4ae6184afdbd9df8a88d4c082b05eeb608469",
	                    "EndpointID": "a265e619b5a8a30de319d14db7c9205b8fea6914cc01725ba21170a66ab73113",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-520529",
	                        "8a3eb121088a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
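Most of the inspect output above can be pulled out with --format instead of reading the full JSON; for example (the values shown are the ones from the JSON above):

    $ docker inspect embed-certs-520529 --format '{{.State.Status}} {{(index .NetworkSettings.Networks "embed-certs-520529").IPAddress}}'
    running 192.168.76.2
    $ docker port embed-certs-520529 8443
    127.0.0.1:33439

The index function is needed because the network name contains hyphens, which Go templates do not accept in direct field access.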
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-520529 -n embed-certs-520529
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-520529 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-520529 logs -n 25: (1.883547933s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-207884                                                                                                                                                                                                                  │ kubernetes-upgrade-207884 │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ start   │ -p cert-expiration-918798 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-918798    │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ delete  │ -p force-systemd-env-400958                                                                                                                                                                                                                   │ force-systemd-env-400958  │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:11 UTC │
	│ start   │ -p cert-options-967682 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:11 UTC │ 24 Nov 25 04:12 UTC │
	│ ssh     │ cert-options-967682 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ ssh     │ -p cert-options-967682 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ delete  │ -p cert-options-967682                                                                                                                                                                                                                        │ cert-options-967682       │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-762702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │                     │
	│ stop    │ -p old-k8s-version-762702 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-762702 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:13 UTC │
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:14 UTC │
	│ image   │ old-k8s-version-762702 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ pause   │ -p old-k8s-version-762702 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │                     │
	│ delete  │ -p old-k8s-version-762702                                                                                                                                                                                                                     │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ delete  │ -p old-k8s-version-762702                                                                                                                                                                                                                     │ old-k8s-version-762702    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301         │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p cert-expiration-918798 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-918798    │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:15 UTC │
	│ delete  │ -p cert-expiration-918798                                                                                                                                                                                                                     │ cert-expiration-918798    │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:15 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529        │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-600301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-600301         │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │                     │
	│ stop    │ -p no-preload-600301 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-600301         │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable dashboard -p no-preload-600301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-600301         │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301         │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-520529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-520529        │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:16:35
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:16:35.304981  484296 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:16:35.305123  484296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:16:35.305135  484296 out.go:374] Setting ErrFile to fd 2...
	I1124 04:16:35.305165  484296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:16:35.305439  484296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:16:35.305818  484296 out.go:368] Setting JSON to false
	I1124 04:16:35.306920  484296 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10725,"bootTime":1763947071,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:16:35.306984  484296 start.go:143] virtualization:  
	I1124 04:16:35.310137  484296 out.go:179] * [no-preload-600301] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:16:35.313874  484296 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:16:35.314009  484296 notify.go:221] Checking for updates...
	I1124 04:16:35.319876  484296 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:16:35.323029  484296 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:16:35.326070  484296 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:16:35.329600  484296 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:16:35.332460  484296 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:16:35.335978  484296 config.go:182] Loaded profile config "no-preload-600301": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:16:35.336597  484296 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:16:35.357677  484296 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:16:35.357787  484296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:16:35.423149  484296 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:16:35.413690943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:16:35.423260  484296 docker.go:319] overlay module found
	I1124 04:16:35.426710  484296 out.go:179] * Using the docker driver based on existing profile
	I1124 04:16:35.429755  484296 start.go:309] selected driver: docker
	I1124 04:16:35.429775  484296 start.go:927] validating driver "docker" against &{Name:no-preload-600301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-600301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:16:35.429896  484296 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:16:35.430723  484296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:16:35.487996  484296 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:16:35.478406024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:16:35.488324  484296 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:16:35.488349  484296 cni.go:84] Creating CNI manager for ""
	I1124 04:16:35.488397  484296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:16:35.488436  484296 start.go:353] cluster config:
	{Name:no-preload-600301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-600301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
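
The "recommending kindnet" line above comes out of minikube's CNI selection: when no CNI is requested explicitly, the choice depends on the driver/runtime pair, and the docker driver with a non-docker runtime such as crio needs a real CNI. A minimal Go sketch of that decision (simplified and illustrative; the real logic lives in pkg/minikube/cni and handles many more cases):

    package main

    import "fmt"

    // chooseCNI mirrors the decision logged above: with the docker driver and
    // a non-docker runtime such as crio, a real CNI (kindnet) is recommended
    // because docker's built-in bridge networking is not available to the
    // runtime. This is a sketch, not minikube's actual cni.New implementation.
    func chooseCNI(driver, runtime, requested string) string {
    	if requested != "" {
    		return requested // user asked for a specific CNI
    	}
    	if driver == "docker" && runtime != "docker" {
    		return "kindnet"
    	}
    	return "bridge"
    }

    func main() {
    	fmt.Println(chooseCNI("docker", "crio", "")) // kindnet
    }
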
	I1124 04:16:35.493485  484296 out.go:179] * Starting "no-preload-600301" primary control-plane node in "no-preload-600301" cluster
	I1124 04:16:35.496358  484296 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:16:35.499334  484296 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:16:35.502276  484296 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:16:35.502293  484296 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:16:35.502427  484296 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/config.json ...
	I1124 04:16:35.502668  484296 cache.go:107] acquiring lock: {Name:mka4a2f4583eceee4d0d2e2fa3203183a50aff31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:16:35.502757  484296 cache.go:115] /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1124 04:16:35.502774  484296 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 115.949µs
	I1124 04:16:35.502791  484296 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1124 04:16:35.502809  484296 cache.go:107] acquiring lock: {Name:mk82ed7285ecc134cdeb0bc32256bce1461f5db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:16:35.502846  484296 cache.go:115] /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1124 04:16:35.502855  484296 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 47.59µs
	I1124 04:16:35.502862  484296 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1124 04:16:35.502872  484296 cache.go:107] acquiring lock: {Name:mk898e92068bf2d94025f1c5924830837ea96337 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:16:35.502904  484296 cache.go:115] /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1124 04:16:35.502910  484296 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 39.065µs
	I1124 04:16:35.502927  484296 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1124 04:16:35.502937  484296 cache.go:107] acquiring lock: {Name:mk3b56d4ebe5e13cc3b2a65d8141d1dac9370d12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:16:35.502969  484296 cache.go:115] /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1124 04:16:35.502974  484296 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 38.598µs
	I1124 04:16:35.502984  484296 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1124 04:16:35.502994  484296 cache.go:107] acquiring lock: {Name:mkb589eec67d50d659ba8fa87de2ae51d0adb72e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:16:35.503020  484296 cache.go:115] /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 04:16:35.503025  484296 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.681µs
	I1124 04:16:35.503031  484296 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 04:16:35.503041  484296 cache.go:107] acquiring lock: {Name:mk5870ac822407e979041e9a62b9ac853f5ed95f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:16:35.503070  484296 cache.go:115] /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1124 04:16:35.503076  484296 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 36.169µs
	I1124 04:16:35.503095  484296 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1124 04:16:35.503105  484296 cache.go:107] acquiring lock: {Name:mkd327c8aa4b9300d809266a80f9c23af9dbf09b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:16:35.503137  484296 cache.go:115] /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1124 04:16:35.503146  484296 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 42.847µs
	I1124 04:16:35.503152  484296 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1124 04:16:35.503339  484296 cache.go:107] acquiring lock: {Name:mk693719655cf87945ce233f6544254d57c4c585 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:16:35.503394  484296 cache.go:115] /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 04:16:35.503406  484296 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 73.774µs
	I1124 04:16:35.503452  484296 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21975-289526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 04:16:35.503463  484296 cache.go:87] Successfully saved all images to host disk.
	I1124 04:16:35.523845  484296 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:16:35.523867  484296 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:16:35.523887  484296 cache.go:243] Successfully downloaded all kic artifacts
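
The cache entries above all follow the same check-then-skip pattern: take a per-image lock, stat the tarball under .minikube/cache/images, and only download when it is missing, recording the elapsed time either way; that is why every image "took" only microseconds here. A Go sketch of the pattern (names are illustrative, not minikube's actual cache package API):

    package main

    import (
    	"fmt"
    	"os"
    	"sync"
    	"time"
    )

    // one mutex per cache path, like the named locks acquired in the log
    var locks sync.Map

    // ensureCached stats the cached tarball and skips the download when it
    // already exists, printing a duration metric as cache.go does above.
    func ensureCached(image, path string) error {
    	mu, _ := locks.LoadOrStore(path, &sync.Mutex{})
    	mu.(*sync.Mutex).Lock()
    	defer mu.(*sync.Mutex).Unlock()

    	start := time.Now()
    	if _, err := os.Stat(path); err == nil {
    		fmt.Printf("cache image %q -> %q took %s (exists)\n", image, path, time.Since(start))
    		return nil
    	}
    	// a real implementation would pull the image and save the tar here
    	return fmt.Errorf("cache miss for %s: download not implemented in this sketch", image)
    }

    func main() {
    	_ = ensureCached("registry.k8s.io/pause:3.10.1", "/tmp/pause_3.10.1")
    }
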
	I1124 04:16:35.523917  484296 start.go:360] acquireMachinesLock for no-preload-600301: {Name:mk857353e378d8804a59f42afa50417296d6c995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:16:35.523979  484296 start.go:364] duration metric: took 40.099µs to acquireMachinesLock for "no-preload-600301"
	I1124 04:16:35.524002  484296 start.go:96] Skipping create...Using existing machine configuration
	I1124 04:16:35.524009  484296 fix.go:54] fixHost starting: 
	I1124 04:16:35.524275  484296 cli_runner.go:164] Run: docker container inspect no-preload-600301 --format={{.State.Status}}
	I1124 04:16:35.541655  484296 fix.go:112] recreateIfNeeded on no-preload-600301: state=Stopped err=<nil>
	W1124 04:16:35.541685  484296 fix.go:138] unexpected machine state, will restart: <nil>
	W1124 04:16:34.831746  480149 node_ready.go:57] node "embed-certs-520529" has "Ready":"False" status (will retry)
	W1124 04:16:37.331600  480149 node_ready.go:57] node "embed-certs-520529" has "Ready":"False" status (will retry)
	I1124 04:16:35.544931  484296 out.go:252] * Restarting existing docker container for "no-preload-600301" ...
	I1124 04:16:35.545040  484296 cli_runner.go:164] Run: docker start no-preload-600301
	I1124 04:16:35.796252  484296 cli_runner.go:164] Run: docker container inspect no-preload-600301 --format={{.State.Status}}
	I1124 04:16:35.821195  484296 kic.go:430] container "no-preload-600301" state is running.
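
fixHost's restart path is driven entirely by `docker container inspect --format={{.State.Status}}`: a stopped container triggers `docker start`, and the status is then inspected again until it reports running. A hedged sketch of that loop with os/exec (simplified; minikube wraps this in its cli_runner with retries and timeouts):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	return strings.TrimSpace(string(out)), err
    }

    // restartIfStopped mirrors the log: inspect, start when not running, then
    // poll until docker reports the container as running. Note docker reports
    // a stopped container as "exited"; the log's "Stopped" is minikube's term.
    func restartIfStopped(name string) error {
    	state, err := containerState(name)
    	if err != nil {
    		return err
    	}
    	if state != "running" {
    		if err := exec.Command("docker", "start", name).Run(); err != nil {
    			return err
    		}
    	}
    	for i := 0; i < 10; i++ {
    		if state, _ = containerState(name); state == "running" {
    			fmt.Printf("container %q state is running.\n", name)
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("container %q not running, last state %q", name, state)
    }

    func main() { _ = restartIfStopped("no-preload-600301") }
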
	I1124 04:16:35.821894  484296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-600301
	I1124 04:16:35.849421  484296 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/config.json ...
	I1124 04:16:35.849645  484296 machine.go:94] provisionDockerMachine start ...
	I1124 04:16:35.849711  484296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:16:35.871159  484296 main.go:143] libmachine: Using SSH client type: native
	I1124 04:16:35.871516  484296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 04:16:35.871533  484296 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:16:35.872137  484296 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 04:16:39.022429  484296 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-600301
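
The "handshake failed: EOF" line above is expected right after `docker start`: sshd inside the container is not yet accepting connections, so libmachine retries the dial until the `hostname` command succeeds a few seconds later. A minimal sketch of that retry at the TCP level (address and forwarded port taken from the log; the retry policy is illustrative):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // dialWithRetry keeps attempting the forwarded SSH port until the daemon
    // in the freshly started container accepts connections, as libmachine
    // does above before running its first SSH command.
    func dialWithRetry(addr string, attempts int) (net.Conn, error) {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			return conn, nil
    		}
    		lastErr = err // early failures surface as handshake EOFs at the SSH layer
    		time.Sleep(time.Second)
    	}
    	return nil, fmt.Errorf("dial %s: %w", addr, lastErr)
    }

    func main() {
    	if conn, err := dialWithRetry("127.0.0.1:33441", 10); err == nil {
    		conn.Close()
    		fmt.Println("sshd is accepting connections")
    	}
    }
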
	
	I1124 04:16:39.022473  484296 ubuntu.go:182] provisioning hostname "no-preload-600301"
	I1124 04:16:39.022550  484296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:16:39.048727  484296 main.go:143] libmachine: Using SSH client type: native
	I1124 04:16:39.049056  484296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 04:16:39.049073  484296 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-600301 && echo "no-preload-600301" | sudo tee /etc/hostname
	I1124 04:16:39.213028  484296 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-600301
	
	I1124 04:16:39.213105  484296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:16:39.230287  484296 main.go:143] libmachine: Using SSH client type: native
	I1124 04:16:39.230647  484296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 04:16:39.230667  484296 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-600301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-600301/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-600301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 04:16:39.382739  484296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 04:16:39.382764  484296 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:16:39.382791  484296 ubuntu.go:190] setting up certificates
	I1124 04:16:39.382808  484296 provision.go:84] configureAuth start
	I1124 04:16:39.382867  484296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-600301
	I1124 04:16:39.399863  484296 provision.go:143] copyHostCerts
	I1124 04:16:39.399945  484296 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:16:39.399960  484296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:16:39.400041  484296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:16:39.400155  484296 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:16:39.400167  484296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:16:39.400194  484296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:16:39.400267  484296 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:16:39.400276  484296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:16:39.400305  484296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:16:39.400367  484296 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.no-preload-600301 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-600301]
	I1124 04:16:39.621886  484296 provision.go:177] copyRemoteCerts
	I1124 04:16:39.621971  484296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:16:39.622028  484296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:16:39.647372  484296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/no-preload-600301/id_rsa Username:docker}
	I1124 04:16:39.750232  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:16:39.769789  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 04:16:39.789767  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 04:16:39.809848  484296 provision.go:87] duration metric: took 427.014756ms to configureAuth
	I1124 04:16:39.809884  484296 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:16:39.810089  484296 config.go:182] Loaded profile config "no-preload-600301": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:16:39.810197  484296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:16:39.827119  484296 main.go:143] libmachine: Using SSH client type: native
	I1124 04:16:39.827442  484296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 04:16:39.827462  484296 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:16:40.239775  484296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 04:16:40.239820  484296 machine.go:97] duration metric: took 4.390165588s to provisionDockerMachine
	I1124 04:16:40.239838  484296 start.go:293] postStartSetup for "no-preload-600301" (driver="docker")
	I1124 04:16:40.239849  484296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:16:40.239952  484296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:16:40.240012  484296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:16:40.266786  484296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/no-preload-600301/id_rsa Username:docker}
	I1124 04:16:40.379351  484296 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:16:40.382623  484296 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:16:40.382653  484296 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:16:40.382665  484296 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:16:40.382733  484296 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:16:40.382812  484296 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:16:40.382937  484296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:16:40.390578  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:16:40.408574  484296 start.go:296] duration metric: took 168.719423ms for postStartSetup
	I1124 04:16:40.408700  484296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:16:40.408763  484296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:16:40.425937  484296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/no-preload-600301/id_rsa Username:docker}
	I1124 04:16:40.531635  484296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:16:40.536297  484296 fix.go:56] duration metric: took 5.012281276s for fixHost
	I1124 04:16:40.536337  484296 start.go:83] releasing machines lock for "no-preload-600301", held for 5.01234559s
	I1124 04:16:40.536450  484296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-600301
	I1124 04:16:40.556660  484296 ssh_runner.go:195] Run: cat /version.json
	I1124 04:16:40.556718  484296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:16:40.556965  484296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:16:40.557035  484296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:16:40.587028  484296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/no-preload-600301/id_rsa Username:docker}
	I1124 04:16:40.592073  484296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/no-preload-600301/id_rsa Username:docker}
	I1124 04:16:40.783147  484296 ssh_runner.go:195] Run: systemctl --version
	I1124 04:16:40.789797  484296 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:16:40.827594  484296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:16:40.832565  484296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:16:40.832661  484296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:16:40.841125  484296 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 04:16:40.841147  484296 start.go:496] detecting cgroup driver to use...
	I1124 04:16:40.841209  484296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:16:40.841282  484296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:16:40.857018  484296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:16:40.869755  484296 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:16:40.869865  484296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:16:40.886143  484296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:16:40.899934  484296 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:16:41.019997  484296 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:16:41.141147  484296 docker.go:234] disabling docker service ...
	I1124 04:16:41.141260  484296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:16:41.159005  484296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:16:41.173578  484296 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:16:41.297365  484296 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:16:41.427254  484296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:16:41.441241  484296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:16:41.456122  484296 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:16:41.456189  484296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:16:41.465054  484296 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:16:41.465133  484296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:16:41.474038  484296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:16:41.483236  484296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:16:41.492716  484296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:16:41.501180  484296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:16:41.510388  484296 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:16:41.521179  484296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:16:41.530272  484296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:16:41.538003  484296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:16:41.545714  484296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:16:41.668077  484296 ssh_runner.go:195] Run: sudo systemctl restart crio
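
Taken together, the sed edits above would leave /etc/crio/crio.conf.d/02-crio.conf with a drop-in along these lines before crio is restarted (a reconstruction from the commands, not a capture from the node; the section headers are the standard CRI-O TOML sections these keys live in):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
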
	I1124 04:16:41.852838  484296 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:16:41.852906  484296 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:16:41.856806  484296 start.go:564] Will wait 60s for crictl version
	I1124 04:16:41.856922  484296 ssh_runner.go:195] Run: which crictl
	I1124 04:16:41.860570  484296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:16:41.886262  484296 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:16:41.886429  484296 ssh_runner.go:195] Run: crio --version
	I1124 04:16:41.915861  484296 ssh_runner.go:195] Run: crio --version
	I1124 04:16:41.949966  484296 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:16:41.952742  484296 cli_runner.go:164] Run: docker network inspect no-preload-600301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:16:41.968163  484296 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 04:16:41.972045  484296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
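
Both /etc/hosts updates in this run (host.minikube.internal here, control-plane.minikube.internal later) use the same upsert idiom: grep for the exact entry first, and only when it is absent rewrite the file by filtering out any stale line for that name and appending the new mapping via a temp file. A Go sketch of the same idempotent update (path and hostname from the log; minikube actually does this over SSH with sudo cp, and error handling is trimmed here):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHostsEntry drops any existing line ending in "\thost" and appends
    // "ip\thost", mirroring the grep/echo/cp pipeline in the log. Writing to a
    // temp file and renaming keeps the update atomic for concurrent readers.
    func upsertHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	fmt.Println(upsertHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"))
    }
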
	I1124 04:16:41.981601  484296 kubeadm.go:884] updating cluster {Name:no-preload-600301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-600301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:16:41.981719  484296 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:16:41.981764  484296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:16:42.029621  484296 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:16:42.029652  484296 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:16:42.029660  484296 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 04:16:42.029762  484296 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-600301 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-600301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 04:16:42.029872  484296 ssh_runner.go:195] Run: crio config
	I1124 04:16:42.097393  484296 cni.go:84] Creating CNI manager for ""
	I1124 04:16:42.097421  484296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:16:42.097444  484296 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:16:42.097507  484296 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-600301 NodeName:no-preload-600301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:16:42.097674  484296 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-600301"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 04:16:42.097782  484296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:16:42.110438  484296 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:16:42.110643  484296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:16:42.123965  484296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 04:16:42.142394  484296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:16:42.160580  484296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1124 04:16:42.181540  484296 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:16:42.186597  484296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:16:42.207403  484296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:16:42.339383  484296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:16:42.358799  484296 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301 for IP: 192.168.85.2
	I1124 04:16:42.358824  484296 certs.go:195] generating shared ca certs ...
	I1124 04:16:42.358843  484296 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:16:42.359073  484296 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:16:42.359155  484296 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:16:42.359170  484296 certs.go:257] generating profile certs ...
	I1124 04:16:42.359307  484296 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.key
	I1124 04:16:42.359401  484296 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.key.18edfd9e
	I1124 04:16:42.359473  484296 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/proxy-client.key
	I1124 04:16:42.359651  484296 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:16:42.359718  484296 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:16:42.359734  484296 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:16:42.359780  484296 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:16:42.359835  484296 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:16:42.359871  484296 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:16:42.359981  484296 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:16:42.360875  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:16:42.379635  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:16:42.398441  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:16:42.417417  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:16:42.436219  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 04:16:42.460213  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 04:16:42.479352  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:16:42.500467  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 04:16:42.522996  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:16:42.546755  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:16:42.577292  484296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:16:42.599531  484296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:16:42.613288  484296 ssh_runner.go:195] Run: openssl version
	I1124 04:16:42.621661  484296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:16:42.632095  484296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:16:42.635909  484296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:16:42.635978  484296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:16:42.680165  484296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
	I1124 04:16:42.689521  484296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:16:42.699153  484296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:16:42.702909  484296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:16:42.702981  484296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:16:42.744232  484296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 04:16:42.752664  484296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:16:42.760861  484296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:16:42.764598  484296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:16:42.764720  484296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:16:42.806111  484296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:16:42.815280  484296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:16:42.819437  484296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 04:16:42.861065  484296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 04:16:42.903395  484296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 04:16:42.957171  484296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 04:16:43.006285  484296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 04:16:43.066711  484296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
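
Each `openssl x509 -checkend 86400` run above asks one question: will this certificate expire within the next 24 hours (non-zero exit) or not (exit 0)? All six control-plane certs pass here, so no regeneration is needed. The equivalent check in Go with crypto/x509 (path from the log; a sketch, not minikube's cert-rotation code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside
    // the given window, matching `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
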
	I1124 04:16:43.121500  484296 kubeadm.go:401] StartCluster: {Name:no-preload-600301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-600301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:16:43.121682  484296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:16:43.121782  484296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:16:43.199721  484296 cri.go:89] found id: ""
	I1124 04:16:43.199837  484296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:16:43.222090  484296 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 04:16:43.222112  484296 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 04:16:43.222182  484296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 04:16:43.251950  484296 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 04:16:43.253007  484296 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-600301" does not appear in /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:16:43.253709  484296 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-289526/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-600301" cluster setting kubeconfig missing "no-preload-600301" context setting]
	I1124 04:16:43.254777  484296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:16:43.256909  484296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 04:16:43.277386  484296 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 04:16:43.277431  484296 kubeadm.go:602] duration metric: took 55.312432ms to restartPrimaryControlPlane
	I1124 04:16:43.277442  484296 kubeadm.go:403] duration metric: took 155.966274ms to StartCluster
	I1124 04:16:43.277457  484296 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:16:43.277551  484296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:16:43.279196  484296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:16:43.279553  484296 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:16:43.280026  484296 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:16:43.280119  484296 addons.go:70] Setting storage-provisioner=true in profile "no-preload-600301"
	I1124 04:16:43.280138  484296 addons.go:239] Setting addon storage-provisioner=true in "no-preload-600301"
	W1124 04:16:43.280144  484296 addons.go:248] addon storage-provisioner should already be in state true
	I1124 04:16:43.280168  484296 host.go:66] Checking if "no-preload-600301" exists ...
	I1124 04:16:43.280853  484296 cli_runner.go:164] Run: docker container inspect no-preload-600301 --format={{.State.Status}}
	I1124 04:16:43.281253  484296 addons.go:70] Setting dashboard=true in profile "no-preload-600301"
	I1124 04:16:43.281294  484296 addons.go:239] Setting addon dashboard=true in "no-preload-600301"
	W1124 04:16:43.281328  484296 addons.go:248] addon dashboard should already be in state true
	I1124 04:16:43.281376  484296 host.go:66] Checking if "no-preload-600301" exists ...
	I1124 04:16:43.282048  484296 cli_runner.go:164] Run: docker container inspect no-preload-600301 --format={{.State.Status}}
	I1124 04:16:43.282812  484296 config.go:182] Loaded profile config "no-preload-600301": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:16:43.283079  484296 addons.go:70] Setting default-storageclass=true in profile "no-preload-600301"
	I1124 04:16:43.285957  484296 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-600301"
	I1124 04:16:43.290113  484296 cli_runner.go:164] Run: docker container inspect no-preload-600301 --format={{.State.Status}}
	I1124 04:16:43.302748  484296 out.go:179] * Verifying Kubernetes components...
	I1124 04:16:43.309364  484296 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 04:16:43.309567  484296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:16:43.318579  484296 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 04:16:43.323903  484296 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 04:16:43.323938  484296 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 04:16:43.324017  484296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:16:43.345165  484296 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 04:16:39.831247  480149 node_ready.go:57] node "embed-certs-520529" has "Ready":"False" status (will retry)
	W1124 04:16:41.831683  480149 node_ready.go:57] node "embed-certs-520529" has "Ready":"False" status (will retry)
	W1124 04:16:44.332112  480149 node_ready.go:57] node "embed-certs-520529" has "Ready":"False" status (will retry)
	I1124 04:16:43.348166  484296 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:16:43.348191  484296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:16:43.348274  484296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:16:43.351438  484296 addons.go:239] Setting addon default-storageclass=true in "no-preload-600301"
	W1124 04:16:43.351461  484296 addons.go:248] addon default-storageclass should already be in state true
	I1124 04:16:43.351484  484296 host.go:66] Checking if "no-preload-600301" exists ...
	I1124 04:16:43.353229  484296 cli_runner.go:164] Run: docker container inspect no-preload-600301 --format={{.State.Status}}
	I1124 04:16:43.402545  484296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/no-preload-600301/id_rsa Username:docker}
	I1124 04:16:43.430344  484296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/no-preload-600301/id_rsa Username:docker}
	I1124 04:16:43.432794  484296 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:16:43.432813  484296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:16:43.432873  484296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:16:43.469708  484296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/no-preload-600301/id_rsa Username:docker}
	I1124 04:16:43.694118  484296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:16:43.700947  484296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:16:43.715817  484296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 04:16:43.715844  484296 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 04:16:43.770774  484296 node_ready.go:35] waiting up to 6m0s for node "no-preload-600301" to be "Ready" ...
	I1124 04:16:43.797029  484296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 04:16:43.797056  484296 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 04:16:43.841135  484296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:16:43.903959  484296 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 04:16:43.904033  484296 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 04:16:43.956399  484296 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 04:16:43.956472  484296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 04:16:44.018681  484296 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 04:16:44.018760  484296 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 04:16:44.059904  484296 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 04:16:44.059982  484296 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 04:16:44.090349  484296 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 04:16:44.090429  484296 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 04:16:44.126633  484296 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 04:16:44.126697  484296 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 04:16:44.152811  484296 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 04:16:44.152892  484296 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 04:16:44.168776  484296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
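
The sequence above stages every dashboard manifest on the node via scp and then applies them all in a single kubectl invocation. As a rough sketch of that pattern (not minikube's actual code; the manifest paths and kubeconfig location are copied from the log for illustration, and kubectl is assumed to be on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Manifests staged under /etc/kubernetes/addons, as in the log above.
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	// Build one "kubectl apply -f a.yaml -f b.yaml ..." command so all
	// objects are submitted in a single client invocation.
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}
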
	W1124 04:16:46.831499  480149 node_ready.go:57] node "embed-certs-520529" has "Ready":"False" status (will retry)
	I1124 04:16:47.332713  480149 node_ready.go:49] node "embed-certs-520529" is "Ready"
	I1124 04:16:47.332745  480149 node_ready.go:38] duration metric: took 40.00466499s for node "embed-certs-520529" to be "Ready" ...
	I1124 04:16:47.332812  480149 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:16:47.332903  480149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:16:47.374094  480149 api_server.go:72] duration metric: took 41.177737951s to wait for apiserver process to appear ...
	I1124 04:16:47.374124  480149 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:16:47.374169  480149 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:16:47.392947  480149 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 04:16:47.394194  480149 api_server.go:141] control plane version: v1.34.1
	I1124 04:16:47.394218  480149 api_server.go:131] duration metric: took 20.085589ms to wait for apiserver health ...
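
The healthz wait above polls https://<control-plane>:8443/healthz until it returns HTTP 200 with the body "ok". A minimal sketch of such a poll, using the endpoint from this log and skipping TLS verification for brevity (an assumption for the sketch; minikube itself trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	url := "https://192.168.76.2:8443/healthz" // endpoint from the log above
	client := &http.Client{
		Timeout: 2 * time.Second,
		// InsecureSkipVerify keeps the sketch short; real tooling should
		// verify against the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 0; attempt < 120; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}
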
	I1124 04:16:47.394227  480149 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:16:47.399193  480149 system_pods.go:59] 8 kube-system pods found
	I1124 04:16:47.399239  480149 system_pods.go:61] "coredns-66bc5c9577-bvwhr" [afc820fb-a24a-4fb0-b2c9-8c5e2014a762] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:16:47.399284  480149 system_pods.go:61] "etcd-embed-certs-520529" [f26ae428-2218-4bda-9b92-578d85c74df8] Running
	I1124 04:16:47.399301  480149 system_pods.go:61] "kindnet-tkncp" [eccdf0bd-3245-4547-aed3-65ae2e72ed82] Running
	I1124 04:16:47.399308  480149 system_pods.go:61] "kube-apiserver-embed-certs-520529" [d25fe462-1c8b-467f-8c81-4610bd9173c3] Running
	I1124 04:16:47.399319  480149 system_pods.go:61] "kube-controller-manager-embed-certs-520529" [093b15ed-2629-4f07-aacb-21da8fe15032] Running
	I1124 04:16:47.399324  480149 system_pods.go:61] "kube-proxy-dt4th" [47798ce5-c1f5-4f74-a933-76514aee25a3] Running
	I1124 04:16:47.399329  480149 system_pods.go:61] "kube-scheduler-embed-certs-520529" [2a37b8ab-a5c8-45f3-9bc9-3e233a33c05d] Running
	I1124 04:16:47.399375  480149 system_pods.go:61] "storage-provisioner" [bad7a9be-48f5-443b-824e-859f9e21d194] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:16:47.399388  480149 system_pods.go:74] duration metric: took 5.155019ms to wait for pod list to return data ...
	I1124 04:16:47.399397  480149 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:16:47.404078  480149 default_sa.go:45] found service account: "default"
	I1124 04:16:47.404110  480149 default_sa.go:55] duration metric: took 4.702853ms for default service account to be created ...
	I1124 04:16:47.404123  480149 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 04:16:47.407878  480149 system_pods.go:86] 8 kube-system pods found
	I1124 04:16:47.407913  480149 system_pods.go:89] "coredns-66bc5c9577-bvwhr" [afc820fb-a24a-4fb0-b2c9-8c5e2014a762] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:16:47.407947  480149 system_pods.go:89] "etcd-embed-certs-520529" [f26ae428-2218-4bda-9b92-578d85c74df8] Running
	I1124 04:16:47.407961  480149 system_pods.go:89] "kindnet-tkncp" [eccdf0bd-3245-4547-aed3-65ae2e72ed82] Running
	I1124 04:16:47.407966  480149 system_pods.go:89] "kube-apiserver-embed-certs-520529" [d25fe462-1c8b-467f-8c81-4610bd9173c3] Running
	I1124 04:16:47.407971  480149 system_pods.go:89] "kube-controller-manager-embed-certs-520529" [093b15ed-2629-4f07-aacb-21da8fe15032] Running
	I1124 04:16:47.407981  480149 system_pods.go:89] "kube-proxy-dt4th" [47798ce5-c1f5-4f74-a933-76514aee25a3] Running
	I1124 04:16:47.407986  480149 system_pods.go:89] "kube-scheduler-embed-certs-520529" [2a37b8ab-a5c8-45f3-9bc9-3e233a33c05d] Running
	I1124 04:16:47.408000  480149 system_pods.go:89] "storage-provisioner" [bad7a9be-48f5-443b-824e-859f9e21d194] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:16:47.408051  480149 retry.go:31] will retry after 225.560356ms: missing components: kube-dns
	I1124 04:16:47.666590  480149 system_pods.go:86] 8 kube-system pods found
	I1124 04:16:47.666655  480149 system_pods.go:89] "coredns-66bc5c9577-bvwhr" [afc820fb-a24a-4fb0-b2c9-8c5e2014a762] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:16:47.666679  480149 system_pods.go:89] "etcd-embed-certs-520529" [f26ae428-2218-4bda-9b92-578d85c74df8] Running
	I1124 04:16:47.666688  480149 system_pods.go:89] "kindnet-tkncp" [eccdf0bd-3245-4547-aed3-65ae2e72ed82] Running
	I1124 04:16:47.666698  480149 system_pods.go:89] "kube-apiserver-embed-certs-520529" [d25fe462-1c8b-467f-8c81-4610bd9173c3] Running
	I1124 04:16:47.666709  480149 system_pods.go:89] "kube-controller-manager-embed-certs-520529" [093b15ed-2629-4f07-aacb-21da8fe15032] Running
	I1124 04:16:47.666742  480149 system_pods.go:89] "kube-proxy-dt4th" [47798ce5-c1f5-4f74-a933-76514aee25a3] Running
	I1124 04:16:47.666755  480149 system_pods.go:89] "kube-scheduler-embed-certs-520529" [2a37b8ab-a5c8-45f3-9bc9-3e233a33c05d] Running
	I1124 04:16:47.666762  480149 system_pods.go:89] "storage-provisioner" [bad7a9be-48f5-443b-824e-859f9e21d194] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:16:47.666782  480149 retry.go:31] will retry after 302.818014ms: missing components: kube-dns
	I1124 04:16:47.973992  480149 system_pods.go:86] 8 kube-system pods found
	I1124 04:16:47.974074  480149 system_pods.go:89] "coredns-66bc5c9577-bvwhr" [afc820fb-a24a-4fb0-b2c9-8c5e2014a762] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:16:47.974094  480149 system_pods.go:89] "etcd-embed-certs-520529" [f26ae428-2218-4bda-9b92-578d85c74df8] Running
	I1124 04:16:47.974102  480149 system_pods.go:89] "kindnet-tkncp" [eccdf0bd-3245-4547-aed3-65ae2e72ed82] Running
	I1124 04:16:47.974132  480149 system_pods.go:89] "kube-apiserver-embed-certs-520529" [d25fe462-1c8b-467f-8c81-4610bd9173c3] Running
	I1124 04:16:47.974144  480149 system_pods.go:89] "kube-controller-manager-embed-certs-520529" [093b15ed-2629-4f07-aacb-21da8fe15032] Running
	I1124 04:16:47.974149  480149 system_pods.go:89] "kube-proxy-dt4th" [47798ce5-c1f5-4f74-a933-76514aee25a3] Running
	I1124 04:16:47.974153  480149 system_pods.go:89] "kube-scheduler-embed-certs-520529" [2a37b8ab-a5c8-45f3-9bc9-3e233a33c05d] Running
	I1124 04:16:47.974159  480149 system_pods.go:89] "storage-provisioner" [bad7a9be-48f5-443b-824e-859f9e21d194] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:16:47.974178  480149 retry.go:31] will retry after 303.058488ms: missing components: kube-dns
	I1124 04:16:48.281557  480149 system_pods.go:86] 8 kube-system pods found
	I1124 04:16:48.281600  480149 system_pods.go:89] "coredns-66bc5c9577-bvwhr" [afc820fb-a24a-4fb0-b2c9-8c5e2014a762] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:16:48.281624  480149 system_pods.go:89] "etcd-embed-certs-520529" [f26ae428-2218-4bda-9b92-578d85c74df8] Running
	I1124 04:16:48.281638  480149 system_pods.go:89] "kindnet-tkncp" [eccdf0bd-3245-4547-aed3-65ae2e72ed82] Running
	I1124 04:16:48.281658  480149 system_pods.go:89] "kube-apiserver-embed-certs-520529" [d25fe462-1c8b-467f-8c81-4610bd9173c3] Running
	I1124 04:16:48.281669  480149 system_pods.go:89] "kube-controller-manager-embed-certs-520529" [093b15ed-2629-4f07-aacb-21da8fe15032] Running
	I1124 04:16:48.281674  480149 system_pods.go:89] "kube-proxy-dt4th" [47798ce5-c1f5-4f74-a933-76514aee25a3] Running
	I1124 04:16:48.281690  480149 system_pods.go:89] "kube-scheduler-embed-certs-520529" [2a37b8ab-a5c8-45f3-9bc9-3e233a33c05d] Running
	I1124 04:16:48.281703  480149 system_pods.go:89] "storage-provisioner" [bad7a9be-48f5-443b-824e-859f9e21d194] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:16:48.281718  480149 retry.go:31] will retry after 399.433563ms: missing components: kube-dns
	I1124 04:16:48.686499  480149 system_pods.go:86] 8 kube-system pods found
	I1124 04:16:48.686531  480149 system_pods.go:89] "coredns-66bc5c9577-bvwhr" [afc820fb-a24a-4fb0-b2c9-8c5e2014a762] Running
	I1124 04:16:48.686538  480149 system_pods.go:89] "etcd-embed-certs-520529" [f26ae428-2218-4bda-9b92-578d85c74df8] Running
	I1124 04:16:48.686543  480149 system_pods.go:89] "kindnet-tkncp" [eccdf0bd-3245-4547-aed3-65ae2e72ed82] Running
	I1124 04:16:48.686547  480149 system_pods.go:89] "kube-apiserver-embed-certs-520529" [d25fe462-1c8b-467f-8c81-4610bd9173c3] Running
	I1124 04:16:48.686556  480149 system_pods.go:89] "kube-controller-manager-embed-certs-520529" [093b15ed-2629-4f07-aacb-21da8fe15032] Running
	I1124 04:16:48.686560  480149 system_pods.go:89] "kube-proxy-dt4th" [47798ce5-c1f5-4f74-a933-76514aee25a3] Running
	I1124 04:16:48.686565  480149 system_pods.go:89] "kube-scheduler-embed-certs-520529" [2a37b8ab-a5c8-45f3-9bc9-3e233a33c05d] Running
	I1124 04:16:48.686569  480149 system_pods.go:89] "storage-provisioner" [bad7a9be-48f5-443b-824e-859f9e21d194] Running
	I1124 04:16:48.686587  480149 system_pods.go:126] duration metric: took 1.282423775s to wait for k8s-apps to be running ...
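
Each retry.go line above is one iteration of the same loop: list the kube-system pods, check that the DNS pods are Running, and sleep a short randomized interval before trying again. A client-go sketch of that loop, assuming a kubeconfig at the default path (~/.kube/config):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
			fmt.Println("kube-dns is running")
			return
		}
		fmt.Println("missing components: kube-dns; will retry")
		time.Sleep(300 * time.Millisecond) // minikube randomizes this delay
	}
}

func allRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}
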
	I1124 04:16:48.686596  480149 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 04:16:48.686653  480149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:16:48.711874  480149 system_svc.go:56] duration metric: took 25.267751ms WaitForService to wait for kubelet
	I1124 04:16:48.711901  480149 kubeadm.go:587] duration metric: took 42.515566614s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:16:48.711918  480149 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:16:48.715890  480149 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:16:48.715976  480149 node_conditions.go:123] node cpu capacity is 2
	I1124 04:16:48.716017  480149 node_conditions.go:105] duration metric: took 4.0898ms to run NodePressure ...
	I1124 04:16:48.716047  480149 start.go:242] waiting for startup goroutines ...
	I1124 04:16:48.716074  480149 start.go:247] waiting for cluster config update ...
	I1124 04:16:48.716111  480149 start.go:256] writing updated cluster config ...
	I1124 04:16:48.716447  480149 ssh_runner.go:195] Run: rm -f paused
	I1124 04:16:48.723523  480149 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:16:48.727981  480149 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bvwhr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:48.735382  480149 pod_ready.go:94] pod "coredns-66bc5c9577-bvwhr" is "Ready"
	I1124 04:16:48.735460  480149 pod_ready.go:86] duration metric: took 7.454141ms for pod "coredns-66bc5c9577-bvwhr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:48.738552  480149 pod_ready.go:83] waiting for pod "etcd-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:48.744399  480149 pod_ready.go:94] pod "etcd-embed-certs-520529" is "Ready"
	I1124 04:16:48.744469  480149 pod_ready.go:86] duration metric: took 5.848033ms for pod "etcd-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:48.747356  480149 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:48.753197  480149 pod_ready.go:94] pod "kube-apiserver-embed-certs-520529" is "Ready"
	I1124 04:16:48.753264  480149 pod_ready.go:86] duration metric: took 5.839582ms for pod "kube-apiserver-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:48.756515  480149 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:49.127981  480149 pod_ready.go:94] pod "kube-controller-manager-embed-certs-520529" is "Ready"
	I1124 04:16:49.128070  480149 pod_ready.go:86] duration metric: took 371.482195ms for pod "kube-controller-manager-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:49.328756  480149 pod_ready.go:83] waiting for pod "kube-proxy-dt4th" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:49.197158  484296 node_ready.go:49] node "no-preload-600301" is "Ready"
	I1124 04:16:49.197189  484296 node_ready.go:38] duration metric: took 5.426331551s for node "no-preload-600301" to be "Ready" ...
	I1124 04:16:49.197204  484296 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:16:49.197265  484296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:16:49.727769  480149 pod_ready.go:94] pod "kube-proxy-dt4th" is "Ready"
	I1124 04:16:49.727847  480149 pod_ready.go:86] duration metric: took 399.021562ms for pod "kube-proxy-dt4th" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:49.928391  480149 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:50.330504  480149 pod_ready.go:94] pod "kube-scheduler-embed-certs-520529" is "Ready"
	I1124 04:16:50.330623  480149 pod_ready.go:86] duration metric: took 402.146286ms for pod "kube-scheduler-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:16:50.330681  480149 pod_ready.go:40] duration metric: took 1.607125437s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:16:50.459566  480149 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 04:16:50.463580  480149 out.go:179] * Done! kubectl is now configured to use "embed-certs-520529" cluster and "default" namespace by default
	I1124 04:16:50.786664  484296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.085679491s)
	I1124 04:16:50.786718  484296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.945514805s)
	I1124 04:16:50.786969  484296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.61810632s)
	I1124 04:16:50.787144  484296 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.589867131s)
	I1124 04:16:50.787168  484296 api_server.go:72] duration metric: took 7.507578802s to wait for apiserver process to appear ...
	I1124 04:16:50.787175  484296 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:16:50.787191  484296 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 04:16:50.790548  484296 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-600301 addons enable metrics-server
	
	I1124 04:16:50.803147  484296 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 04:16:50.804699  484296 api_server.go:141] control plane version: v1.34.1
	I1124 04:16:50.804728  484296 api_server.go:131] duration metric: took 17.546766ms to wait for apiserver health ...
	I1124 04:16:50.804738  484296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:16:50.826684  484296 system_pods.go:59] 8 kube-system pods found
	I1124 04:16:50.826726  484296 system_pods.go:61] "coredns-66bc5c9577-x6vx6" [f760eed4-9015-4d00-a224-e417f52d2938] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:16:50.826736  484296 system_pods.go:61] "etcd-no-preload-600301" [b23fe25a-20ab-47d6-9771-1505a8aaf295] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:16:50.826742  484296 system_pods.go:61] "kindnet-rqpt9" [a7f1c5ad-1407-46d8-9644-72a830d743e0] Running
	I1124 04:16:50.826748  484296 system_pods.go:61] "kube-apiserver-no-preload-600301" [1db2ceaf-3f52-4486-9474-99fbf501425d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:16:50.826754  484296 system_pods.go:61] "kube-controller-manager-no-preload-600301" [5687b2b0-9a55-4872-b7c3-81779518bc55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:16:50.826759  484296 system_pods.go:61] "kube-proxy-bzg2j" [ff549722-c13c-46b4-8ba0-9c34338e030d] Running
	I1124 04:16:50.826765  484296 system_pods.go:61] "kube-scheduler-no-preload-600301" [53ceff81-cfd1-43e5-9754-15d48f6b34db] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:16:50.826771  484296 system_pods.go:61] "storage-provisioner" [a6a27bc4-a6cb-46f9-98ca-f1ae25373869] Running
	I1124 04:16:50.826784  484296 system_pods.go:74] duration metric: took 22.039057ms to wait for pod list to return data ...
	I1124 04:16:50.826792  484296 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:16:50.837638  484296 default_sa.go:45] found service account: "default"
	I1124 04:16:50.837670  484296 default_sa.go:55] duration metric: took 10.867264ms for default service account to be created ...
	I1124 04:16:50.837682  484296 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 04:16:50.838946  484296 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 04:16:50.841888  484296 addons.go:530] duration metric: took 7.561858276s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 04:16:50.846163  484296 system_pods.go:86] 8 kube-system pods found
	I1124 04:16:50.846196  484296 system_pods.go:89] "coredns-66bc5c9577-x6vx6" [f760eed4-9015-4d00-a224-e417f52d2938] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:16:50.846206  484296 system_pods.go:89] "etcd-no-preload-600301" [b23fe25a-20ab-47d6-9771-1505a8aaf295] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:16:50.846211  484296 system_pods.go:89] "kindnet-rqpt9" [a7f1c5ad-1407-46d8-9644-72a830d743e0] Running
	I1124 04:16:50.846219  484296 system_pods.go:89] "kube-apiserver-no-preload-600301" [1db2ceaf-3f52-4486-9474-99fbf501425d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:16:50.846226  484296 system_pods.go:89] "kube-controller-manager-no-preload-600301" [5687b2b0-9a55-4872-b7c3-81779518bc55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:16:50.846242  484296 system_pods.go:89] "kube-proxy-bzg2j" [ff549722-c13c-46b4-8ba0-9c34338e030d] Running
	I1124 04:16:50.846249  484296 system_pods.go:89] "kube-scheduler-no-preload-600301" [53ceff81-cfd1-43e5-9754-15d48f6b34db] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:16:50.846253  484296 system_pods.go:89] "storage-provisioner" [a6a27bc4-a6cb-46f9-98ca-f1ae25373869] Running
	I1124 04:16:50.846261  484296 system_pods.go:126] duration metric: took 8.573458ms to wait for k8s-apps to be running ...
	I1124 04:16:50.846269  484296 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 04:16:50.846326  484296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:16:50.867193  484296 system_svc.go:56] duration metric: took 20.912889ms WaitForService to wait for kubelet
	I1124 04:16:50.867225  484296 kubeadm.go:587] duration metric: took 7.58763471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:16:50.867245  484296 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:16:50.871658  484296 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:16:50.871700  484296 node_conditions.go:123] node cpu capacity is 2
	I1124 04:16:50.871714  484296 node_conditions.go:105] duration metric: took 4.463696ms to run NodePressure ...
	I1124 04:16:50.871727  484296 start.go:242] waiting for startup goroutines ...
	I1124 04:16:50.871735  484296 start.go:247] waiting for cluster config update ...
	I1124 04:16:50.871745  484296 start.go:256] writing updated cluster config ...
	I1124 04:16:50.872019  484296 ssh_runner.go:195] Run: rm -f paused
	I1124 04:16:50.876985  484296 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:16:50.881359  484296 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x6vx6" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 04:16:52.888658  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	W1124 04:16:55.389019  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	W1124 04:16:57.886893  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	W1124 04:16:59.888336  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
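
The pod_ready lines above apply a stricter test than Running: a pod only counts once its Ready condition is True, which is why coredns-66bc5c9577-x6vx6 keeps failing the check at 04:16:52 through 04:16:59 even though the earlier system_pods listing already showed it Running. A self-contained sketch of that condition check (illustrative, not minikube's own helper):

package readiness

import corev1 "k8s.io/api/core/v1"

// IsPodReady reports whether the pod's Ready condition is True, the test
// behind the pod_ready.go log lines above.
func IsPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
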
	
	
	==> CRI-O <==
	Nov 24 04:16:47 embed-certs-520529 crio[839]: time="2025-11-24T04:16:47.535189368Z" level=info msg="Created container b289307aa7d77c44dab68090efad212e866adf4aabc73198360b32959a341903: kube-system/coredns-66bc5c9577-bvwhr/coredns" id=47d0803a-92c6-4ea1-8f87-5af3cd6bea60 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:16:47 embed-certs-520529 crio[839]: time="2025-11-24T04:16:47.53612154Z" level=info msg="Starting container: b289307aa7d77c44dab68090efad212e866adf4aabc73198360b32959a341903" id=09896a54-4b28-420a-bf3c-67a078f5cb7a name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:16:47 embed-certs-520529 crio[839]: time="2025-11-24T04:16:47.537892688Z" level=info msg="Started container" PID=1731 containerID=b289307aa7d77c44dab68090efad212e866adf4aabc73198360b32959a341903 description=kube-system/coredns-66bc5c9577-bvwhr/coredns id=09896a54-4b28-420a-bf3c-67a078f5cb7a name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d891aa4c72649085eadf1c694ba0361fc0a69d970c21edb8ef84bf71989a9d8
	Nov 24 04:16:51 embed-certs-520529 crio[839]: time="2025-11-24T04:16:51.126221533Z" level=info msg="Running pod sandbox: default/busybox/POD" id=bec8497d-114f-44f9-b797-9e3cb06d20a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:16:51 embed-certs-520529 crio[839]: time="2025-11-24T04:16:51.126295601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:16:51 embed-certs-520529 crio[839]: time="2025-11-24T04:16:51.13375314Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:13cec2e5419e851fef2130003e3b59c545f09838409c550c923ca133e57b2588 UID:29a8cb8a-6390-49d0-a8b7-1a3f51501ad7 NetNS:/var/run/netns/aeaf465f-07cc-452d-a12f-6c9fb66e0a36 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012aa48}] Aliases:map[]}"
	Nov 24 04:16:51 embed-certs-520529 crio[839]: time="2025-11-24T04:16:51.133789973Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 04:16:51 embed-certs-520529 crio[839]: time="2025-11-24T04:16:51.159493101Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:13cec2e5419e851fef2130003e3b59c545f09838409c550c923ca133e57b2588 UID:29a8cb8a-6390-49d0-a8b7-1a3f51501ad7 NetNS:/var/run/netns/aeaf465f-07cc-452d-a12f-6c9fb66e0a36 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012aa48}] Aliases:map[]}"
	Nov 24 04:16:51 embed-certs-520529 crio[839]: time="2025-11-24T04:16:51.159801429Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 04:16:51 embed-certs-520529 crio[839]: time="2025-11-24T04:16:51.166202257Z" level=info msg="Ran pod sandbox 13cec2e5419e851fef2130003e3b59c545f09838409c550c923ca133e57b2588 with infra container: default/busybox/POD" id=bec8497d-114f-44f9-b797-9e3cb06d20a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:16:51 embed-certs-520529 crio[839]: time="2025-11-24T04:16:51.168632491Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=988225f1-30c9-4edd-a1b9-6ea16043abde name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:16:51 embed-certs-520529 crio[839]: time="2025-11-24T04:16:51.168928798Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=988225f1-30c9-4edd-a1b9-6ea16043abde name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:16:51 embed-certs-520529 crio[839]: time="2025-11-24T04:16:51.169045157Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=988225f1-30c9-4edd-a1b9-6ea16043abde name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:16:51 embed-certs-520529 crio[839]: time="2025-11-24T04:16:51.172474041Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dbb5d583-8997-412e-a9bc-00fd7003b68c name=/runtime.v1.ImageService/PullImage
	Nov 24 04:16:51 embed-certs-520529 crio[839]: time="2025-11-24T04:16:51.177215845Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 04:16:53 embed-certs-520529 crio[839]: time="2025-11-24T04:16:53.44533185Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=dbb5d583-8997-412e-a9bc-00fd7003b68c name=/runtime.v1.ImageService/PullImage
	Nov 24 04:16:53 embed-certs-520529 crio[839]: time="2025-11-24T04:16:53.446493957Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=feb55a52-0417-4f97-94f4-dd8d0c3d5d7e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:16:53 embed-certs-520529 crio[839]: time="2025-11-24T04:16:53.448282288Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5a66a737-f9df-4633-9a98-e8426de6d1e0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:16:53 embed-certs-520529 crio[839]: time="2025-11-24T04:16:53.454612214Z" level=info msg="Creating container: default/busybox/busybox" id=27a554ce-c5c4-40bf-bf99-849b11f0862a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:16:53 embed-certs-520529 crio[839]: time="2025-11-24T04:16:53.454846925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:16:53 embed-certs-520529 crio[839]: time="2025-11-24T04:16:53.459903284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:16:53 embed-certs-520529 crio[839]: time="2025-11-24T04:16:53.460484682Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:16:53 embed-certs-520529 crio[839]: time="2025-11-24T04:16:53.476036428Z" level=info msg="Created container 506cacae11bf45a278177a1c490c0ffa309dd9c0992cd9b1750b1334dafcfd68: default/busybox/busybox" id=27a554ce-c5c4-40bf-bf99-849b11f0862a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:16:53 embed-certs-520529 crio[839]: time="2025-11-24T04:16:53.478574585Z" level=info msg="Starting container: 506cacae11bf45a278177a1c490c0ffa309dd9c0992cd9b1750b1334dafcfd68" id=cfe3d888-494a-4359-a936-c3f4b38752f7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:16:53 embed-certs-520529 crio[839]: time="2025-11-24T04:16:53.481422211Z" level=info msg="Started container" PID=1799 containerID=506cacae11bf45a278177a1c490c0ffa309dd9c0992cd9b1750b1334dafcfd68 description=default/busybox/busybox id=cfe3d888-494a-4359-a936-c3f4b38752f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=13cec2e5419e851fef2130003e3b59c545f09838409c550c923ca133e57b2588
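
The CRI-O entries above trace one pod through the CRI lifecycle: RunPodSandbox, then ImageStatus/PullImage for gcr.io/k8s-minikube/busybox, then CreateContainer and StartContainer. The same gRPC API is scriptable; a minimal sketch that lists containers over CRI-O's default socket (the socket path is an assumption about this environment; crictl uses the same endpoint):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// CRI-O's default runtime endpoint.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	// Roughly the data shown in the "container status" table below.
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%v\n", c.Id, c.GetMetadata().GetName(), c.State)
	}
}
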
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	506cacae11bf4       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago        Running             busybox                   0                   13cec2e5419e8       busybox                                      default
	b289307aa7d77       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      15 seconds ago       Running             coredns                   0                   1d891aa4c7264       coredns-66bc5c9577-bvwhr                     kube-system
	567117a7fad63       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 seconds ago       Running             storage-provisioner       0                   631ec20eaca2b       storage-provisioner                          kube-system
	b1e4d0115efee       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   376bd9852f10a       kube-proxy-dt4th                             kube-system
	e10c84073213e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      56 seconds ago       Running             kindnet-cni               0                   f1656a94d521c       kindnet-tkncp                                kube-system
	9cbdab3d2df67       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   6a0d90408a781       kube-apiserver-embed-certs-520529            kube-system
	d4d56a6cabd03       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   520d00b3e30e0       etcd-embed-certs-520529                      kube-system
	6a95e6a0d41b9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   bc64445028d75       kube-scheduler-embed-certs-520529            kube-system
	4bcce03b226d9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   4447d96108b14       kube-controller-manager-embed-certs-520529   kube-system
	
	
	==> coredns [b289307aa7d77c44dab68090efad212e866adf4aabc73198360b32959a341903] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53979 - 57002 "HINFO IN 8959526887116296524.2110806711792291125. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042287183s
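
The HINFO query above is CoreDNS's startup self-check against its own listener. Any in-cluster client can query the same service; a sketch that resolves a cluster-internal name directly against the kube-dns ClusterIP allocated later in this log (10.96.0.10), assuming it runs somewhere that can reach the cluster network, such as a pod or the minikube node:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Force all lookups through the kube-dns ClusterIP from this report.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // expect the apiserver ClusterIP, 10.96.0.1
}
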
	
	
	==> describe nodes <==
	Name:               embed-certs-520529
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-520529
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=embed-certs-520529
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_16_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:15:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-520529
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:17:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:17:02 +0000   Mon, 24 Nov 2025 04:15:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:17:02 +0000   Mon, 24 Nov 2025 04:15:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:17:02 +0000   Mon, 24 Nov 2025 04:15:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:17:02 +0000   Mon, 24 Nov 2025 04:16:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-520529
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                cb05b9d1-526c-48cf-b8c9-27f04aa8373b
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-bvwhr                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-520529                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-tkncp                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-520529             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-520529    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-dt4th                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-520529             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node embed-certs-520529 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node embed-certs-520529 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node embed-certs-520529 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-520529 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-520529 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-520529 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-520529 event: Registered Node embed-certs-520529 in Controller
	  Normal   NodeReady                16s                kubelet          Node embed-certs-520529 status is now: NodeReady
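
Everything kubectl renders above comes from the Node object's status, and the NodePressure and capacity checks earlier in this log read the same fields. A client-go sketch that prints the conditions and allocatable CPU for this node (node name copied from the output above; kubeconfig at the default path is an assumption):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-520529", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Same rows as the Conditions table above: Type, Status, Reason.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
}
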
	
	
	==> dmesg <==
	[Nov24 03:52] overlayfs: idmapped layers are currently not supported
	[Nov24 03:54] overlayfs: idmapped layers are currently not supported
	[Nov24 03:55] overlayfs: idmapped layers are currently not supported
	[Nov24 03:56] overlayfs: idmapped layers are currently not supported
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	[Nov24 04:15] overlayfs: idmapped layers are currently not supported
	[ +47.476343] overlayfs: idmapped layers are currently not supported
	[Nov24 04:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d4d56a6cabd039d56c64e98e0c6a088b3aab738b87add8ca425e49be15bea718] <==
	{"level":"warn","ts":"2025-11-24T04:15:56.515997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.554420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.589805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.607172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.629365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.651599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.692120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.709295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.714172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.741970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.774102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.787646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.803448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.828090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.847821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.864619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.881858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.901813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.927770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.957321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:56.976384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:57.006664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:57.019360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:57.039382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:15:57.145961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59060","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 04:17:02 up  2:59,  0 user,  load average: 3.50, 3.26, 2.78
	Linux embed-certs-520529 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e10c84073213e9eeaec6cb2d1ec24c7385c892f2edcdf593c0e9a5837607e585] <==
	I1124 04:16:06.442165       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:16:06.442435       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 04:16:06.443799       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:16:06.443821       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:16:06.443833       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:16:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:16:06.621414       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:16:06.621433       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:16:06.621443       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:16:06.621730       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 04:16:36.620872       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 04:16:36.621830       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 04:16:36.621853       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 04:16:36.628329       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 04:16:38.121888       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:16:38.121994       1 metrics.go:72] Registering metrics
	I1124 04:16:38.122137       1 controller.go:711] "Syncing nftables rules"
	I1124 04:16:46.622549       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 04:16:46.622606       1 main.go:301] handling current node
	I1124 04:16:56.620547       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 04:16:56.620604       1 main.go:301] handling current node
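
The kindnet excerpt above is the standard client-go reflector/informer pattern in action: the watches fail with i/o timeouts at 04:16:36, the reflectors retry, and at 04:16:38 the caches sync and the controller starts handling nodes. A minimal shared-informer sketch of that pattern (kubeconfig path assumed for the sketch; kindnet itself connects with in-cluster config, per its first log line):

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	nodes := factory.Core().V1().Nodes().Informer()
	nodes.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("handling node:", obj.(*corev1.Node).Name)
		},
	})
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Blocks until the initial list/watch succeeds; failed watches are
	// retried with backoff, like the reflector errors in the log above.
	if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
		panic("cache never synced")
	}
	select {} // keep watching
}
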
	
	
	==> kube-apiserver [9cbdab3d2df67bcd5e2194d8578f88745a8b17b8f2a68dfb3142e547928e27b3] <==
	E1124 04:15:58.212232       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1124 04:15:58.212400       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1124 04:15:58.259963       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 04:15:58.274188       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:15:58.276098       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 04:15:58.286801       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:15:58.287061       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 04:15:58.428830       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:15:58.868502       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 04:15:58.876365       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 04:15:58.876388       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:15:59.680840       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:15:59.736544       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:15:59.876596       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 04:15:59.884674       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1124 04:15:59.885976       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 04:15:59.892026       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 04:16:00.168802       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:16:01.091618       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 04:16:01.152429       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 04:16:01.172329       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 04:16:05.527392       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:16:05.536155       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:16:05.774754       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 04:16:06.160382       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4bcce03b226d98affc33e643c3db5d271643474a4fa9acbb785517fb83f25f83] <==
	I1124 04:16:05.099855       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:16:05.099905       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:16:05.113313       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:16:05.113951       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:16:05.117528       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 04:16:05.117711       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 04:16:05.118141       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 04:16:05.118319       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 04:16:05.121113       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 04:16:05.122346       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 04:16:05.122894       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 04:16:05.121245       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 04:16:05.123302       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 04:16:05.123352       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 04:16:05.127725       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 04:16:05.121416       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 04:16:05.121440       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 04:16:05.130477       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 04:16:05.137919       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 04:16:05.138114       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 04:16:05.138196       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 04:16:05.138247       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 04:16:05.138275       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 04:16:05.183593       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-520529" podCIDRs=["10.244.0.0/24"]
	I1124 04:16:50.073777       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b1e4d0115efeebcba19da148db77a7f5922e88f76a9e086580654094b425f2db] <==
	I1124 04:16:07.739755       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:16:07.816078       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:16:07.916605       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:16:07.916712       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 04:16:07.916817       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:16:07.936853       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:16:07.936909       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:16:07.944063       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:16:07.944382       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:16:07.944405       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:16:07.954401       1 config.go:200] "Starting service config controller"
	I1124 04:16:07.954427       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:16:07.954617       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:16:07.954630       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:16:07.955790       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:16:07.962955       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:16:07.962981       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:16:07.956323       1 config.go:309] "Starting node config controller"
	I1124 04:16:07.962996       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:16:07.963001       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:16:08.055358       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:16:08.056545       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6a95e6a0d41b975e42398ca2f9cf222a27a81d83de23ae4ce654fe4fe66c732f] <==
	I1124 04:15:58.648488       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:15:58.648538       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:15:58.649083       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 04:15:58.649194       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 04:15:58.650222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 04:15:58.661757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 04:15:58.661838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 04:15:58.661898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 04:15:58.661963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 04:15:58.662177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 04:15:58.662213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 04:15:58.662258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 04:15:58.662283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 04:15:58.662324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 04:15:58.665484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 04:15:58.665561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 04:15:58.665632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 04:15:58.665690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 04:15:58.665732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 04:15:58.665788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 04:15:58.665919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 04:15:58.665896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 04:15:58.666015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 04:15:59.568163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1124 04:16:01.449387       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:16:05 embed-certs-520529 kubelet[1301]: I1124 04:16:05.813616    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eccdf0bd-3245-4547-aed3-65ae2e72ed82-cni-cfg\") pod \"kindnet-tkncp\" (UID: \"eccdf0bd-3245-4547-aed3-65ae2e72ed82\") " pod="kube-system/kindnet-tkncp"
	Nov 24 04:16:05 embed-certs-520529 kubelet[1301]: I1124 04:16:05.813660    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj869\" (UniqueName: \"kubernetes.io/projected/eccdf0bd-3245-4547-aed3-65ae2e72ed82-kube-api-access-bj869\") pod \"kindnet-tkncp\" (UID: \"eccdf0bd-3245-4547-aed3-65ae2e72ed82\") " pod="kube-system/kindnet-tkncp"
	Nov 24 04:16:05 embed-certs-520529 kubelet[1301]: I1124 04:16:05.813683    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eccdf0bd-3245-4547-aed3-65ae2e72ed82-xtables-lock\") pod \"kindnet-tkncp\" (UID: \"eccdf0bd-3245-4547-aed3-65ae2e72ed82\") " pod="kube-system/kindnet-tkncp"
	Nov 24 04:16:05 embed-certs-520529 kubelet[1301]: I1124 04:16:05.813701    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eccdf0bd-3245-4547-aed3-65ae2e72ed82-lib-modules\") pod \"kindnet-tkncp\" (UID: \"eccdf0bd-3245-4547-aed3-65ae2e72ed82\") " pod="kube-system/kindnet-tkncp"
	Nov 24 04:16:05 embed-certs-520529 kubelet[1301]: I1124 04:16:05.813720    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47798ce5-c1f5-4f74-a933-76514aee25a3-xtables-lock\") pod \"kube-proxy-dt4th\" (UID: \"47798ce5-c1f5-4f74-a933-76514aee25a3\") " pod="kube-system/kube-proxy-dt4th"
	Nov 24 04:16:05 embed-certs-520529 kubelet[1301]: I1124 04:16:05.813738    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47798ce5-c1f5-4f74-a933-76514aee25a3-lib-modules\") pod \"kube-proxy-dt4th\" (UID: \"47798ce5-c1f5-4f74-a933-76514aee25a3\") " pod="kube-system/kube-proxy-dt4th"
	Nov 24 04:16:05 embed-certs-520529 kubelet[1301]: I1124 04:16:05.813756    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47798ce5-c1f5-4f74-a933-76514aee25a3-kube-proxy\") pod \"kube-proxy-dt4th\" (UID: \"47798ce5-c1f5-4f74-a933-76514aee25a3\") " pod="kube-system/kube-proxy-dt4th"
	Nov 24 04:16:05 embed-certs-520529 kubelet[1301]: I1124 04:16:05.813772    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vcxz\" (UniqueName: \"kubernetes.io/projected/47798ce5-c1f5-4f74-a933-76514aee25a3-kube-api-access-2vcxz\") pod \"kube-proxy-dt4th\" (UID: \"47798ce5-c1f5-4f74-a933-76514aee25a3\") " pod="kube-system/kube-proxy-dt4th"
	Nov 24 04:16:05 embed-certs-520529 kubelet[1301]: I1124 04:16:05.933510    1301 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 04:16:06 embed-certs-520529 kubelet[1301]: W1124 04:16:06.136835    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/crio-f1656a94d521c5a86c87ef7e915f1af12864540acaa60535bfb67f58f035cb78 WatchSource:0}: Error finding container f1656a94d521c5a86c87ef7e915f1af12864540acaa60535bfb67f58f035cb78: Status 404 returned error can't find the container with id f1656a94d521c5a86c87ef7e915f1af12864540acaa60535bfb67f58f035cb78
	Nov 24 04:16:06 embed-certs-520529 kubelet[1301]: E1124 04:16:06.914879    1301 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 24 04:16:06 embed-certs-520529 kubelet[1301]: E1124 04:16:06.915003    1301 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/47798ce5-c1f5-4f74-a933-76514aee25a3-kube-proxy podName:47798ce5-c1f5-4f74-a933-76514aee25a3 nodeName:}" failed. No retries permitted until 2025-11-24 04:16:07.414977199 +0000 UTC m=+6.460539384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/47798ce5-c1f5-4f74-a933-76514aee25a3-kube-proxy") pod "kube-proxy-dt4th" (UID: "47798ce5-c1f5-4f74-a933-76514aee25a3") : failed to sync configmap cache: timed out waiting for the condition
	Nov 24 04:16:07 embed-certs-520529 kubelet[1301]: W1124 04:16:07.637381    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/crio-376bd9852f10a56b159c5fe66f064c438c108752efaa1889f3b48737a4de4322 WatchSource:0}: Error finding container 376bd9852f10a56b159c5fe66f064c438c108752efaa1889f3b48737a4de4322: Status 404 returned error can't find the container with id 376bd9852f10a56b159c5fe66f064c438c108752efaa1889f3b48737a4de4322
	Nov 24 04:16:08 embed-certs-520529 kubelet[1301]: I1124 04:16:08.323204    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tkncp" podStartSLOduration=3.323184127 podStartE2EDuration="3.323184127s" podCreationTimestamp="2025-11-24 04:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:16:07.341740585 +0000 UTC m=+6.387302778" watchObservedRunningTime="2025-11-24 04:16:08.323184127 +0000 UTC m=+7.368746312"
	Nov 24 04:16:08 embed-certs-520529 kubelet[1301]: I1124 04:16:08.448282    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dt4th" podStartSLOduration=3.448263833 podStartE2EDuration="3.448263833s" podCreationTimestamp="2025-11-24 04:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:16:08.327767866 +0000 UTC m=+7.373330059" watchObservedRunningTime="2025-11-24 04:16:08.448263833 +0000 UTC m=+7.493826010"
	Nov 24 04:16:46 embed-certs-520529 kubelet[1301]: I1124 04:16:46.973270    1301 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 04:16:47 embed-certs-520529 kubelet[1301]: I1124 04:16:47.036231    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbp6n\" (UniqueName: \"kubernetes.io/projected/bad7a9be-48f5-443b-824e-859f9e21d194-kube-api-access-zbp6n\") pod \"storage-provisioner\" (UID: \"bad7a9be-48f5-443b-824e-859f9e21d194\") " pod="kube-system/storage-provisioner"
	Nov 24 04:16:47 embed-certs-520529 kubelet[1301]: I1124 04:16:47.036578    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bad7a9be-48f5-443b-824e-859f9e21d194-tmp\") pod \"storage-provisioner\" (UID: \"bad7a9be-48f5-443b-824e-859f9e21d194\") " pod="kube-system/storage-provisioner"
	Nov 24 04:16:47 embed-certs-520529 kubelet[1301]: I1124 04:16:47.136940    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n27dv\" (UniqueName: \"kubernetes.io/projected/afc820fb-a24a-4fb0-b2c9-8c5e2014a762-kube-api-access-n27dv\") pod \"coredns-66bc5c9577-bvwhr\" (UID: \"afc820fb-a24a-4fb0-b2c9-8c5e2014a762\") " pod="kube-system/coredns-66bc5c9577-bvwhr"
	Nov 24 04:16:47 embed-certs-520529 kubelet[1301]: I1124 04:16:47.137180    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afc820fb-a24a-4fb0-b2c9-8c5e2014a762-config-volume\") pod \"coredns-66bc5c9577-bvwhr\" (UID: \"afc820fb-a24a-4fb0-b2c9-8c5e2014a762\") " pod="kube-system/coredns-66bc5c9577-bvwhr"
	Nov 24 04:16:47 embed-certs-520529 kubelet[1301]: W1124 04:16:47.397029    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/crio-631ec20eaca2bb412d1ec0069ca396430d6c82b5e4e19ae9f3a985c54a3fe0de WatchSource:0}: Error finding container 631ec20eaca2bb412d1ec0069ca396430d6c82b5e4e19ae9f3a985c54a3fe0de: Status 404 returned error can't find the container with id 631ec20eaca2bb412d1ec0069ca396430d6c82b5e4e19ae9f3a985c54a3fe0de
	Nov 24 04:16:48 embed-certs-520529 kubelet[1301]: I1124 04:16:48.468152    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.46813158 podStartE2EDuration="41.46813158s" podCreationTimestamp="2025-11-24 04:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:16:48.467748494 +0000 UTC m=+47.513310687" watchObservedRunningTime="2025-11-24 04:16:48.46813158 +0000 UTC m=+47.513693765"
	Nov 24 04:16:48 embed-certs-520529 kubelet[1301]: I1124 04:16:48.468255    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bvwhr" podStartSLOduration=42.468249095 podStartE2EDuration="42.468249095s" podCreationTimestamp="2025-11-24 04:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:16:48.443172461 +0000 UTC m=+47.488734638" watchObservedRunningTime="2025-11-24 04:16:48.468249095 +0000 UTC m=+47.513811272"
	Nov 24 04:16:50 embed-certs-520529 kubelet[1301]: I1124 04:16:50.868943    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hbh7\" (UniqueName: \"kubernetes.io/projected/29a8cb8a-6390-49d0-a8b7-1a3f51501ad7-kube-api-access-8hbh7\") pod \"busybox\" (UID: \"29a8cb8a-6390-49d0-a8b7-1a3f51501ad7\") " pod="default/busybox"
	Nov 24 04:16:51 embed-certs-520529 kubelet[1301]: W1124 04:16:51.164849    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/crio-13cec2e5419e851fef2130003e3b59c545f09838409c550c923ca133e57b2588 WatchSource:0}: Error finding container 13cec2e5419e851fef2130003e3b59c545f09838409c550c923ca133e57b2588: Status 404 returned error can't find the container with id 13cec2e5419e851fef2130003e3b59c545f09838409c550c923ca133e57b2588
	
	
	==> storage-provisioner [567117a7fad63119b2a7f7da430436328c75aaa9c7a57303283525897820fe30] <==
	I1124 04:16:47.583219       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 04:16:47.635967       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 04:16:47.636025       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 04:16:47.667413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:47.678583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:16:47.678759       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 04:16:47.678936       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-520529_7eddf184-271a-43fe-b40d-d39b329384ef!
	I1124 04:16:47.679897       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ee581ba2-d5b1-413b-ba36-b573eee08872", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-520529_7eddf184-271a-43fe-b40d-d39b329384ef became leader
	W1124 04:16:47.695780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:47.711547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:16:47.780007       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-520529_7eddf184-271a-43fe-b40d-d39b329384ef!
	W1124 04:16:49.715181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:49.722941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:51.726501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:51.731001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:53.733899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:53.741887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:55.744760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:55.754263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:57.757634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:57.762575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:59.765904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:16:59.770894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:01.774158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:01.794529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-520529 -n embed-certs-520529
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-520529 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.21s)
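
The non-Running-pod probe that closes the post-mortem above can be reused verbatim when triaging a profile by hand. A minimal sketch, assuming the embed-certs-520529 context from this run is still present in the kubeconfig:

	# List pods in every namespace whose phase is anything but Running
	kubectl --context embed-certs-520529 get po -A --field-selector=status.phase!=Running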

TestStartStop/group/no-preload/serial/Pause (7.04s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-600301 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-600301 --alsologtostderr -v=1: exit status 80 (2.085638558s)

-- stdout --
	* Pausing node no-preload-600301 ... 
	
	

-- /stdout --
** stderr ** 
	I1124 04:17:40.486050  489425 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:17:40.486213  489425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:17:40.486219  489425 out.go:374] Setting ErrFile to fd 2...
	I1124 04:17:40.486224  489425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:17:40.486512  489425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:17:40.486769  489425 out.go:368] Setting JSON to false
	I1124 04:17:40.486786  489425 mustload.go:66] Loading cluster: no-preload-600301
	I1124 04:17:40.487335  489425 config.go:182] Loaded profile config "no-preload-600301": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:17:40.487961  489425 cli_runner.go:164] Run: docker container inspect no-preload-600301 --format={{.State.Status}}
	I1124 04:17:40.510927  489425 host.go:66] Checking if "no-preload-600301" exists ...
	I1124 04:17:40.511294  489425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:17:40.615976  489425 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-24 04:17:40.602034749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:17:40.616612  489425 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763935228-21975/minikube-v1.37.0-1763935228-21975-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763935228-21975-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-600301 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 04:17:40.622265  489425 out.go:179] * Pausing node no-preload-600301 ... 
	I1124 04:17:40.625665  489425 host.go:66] Checking if "no-preload-600301" exists ...
	I1124 04:17:40.626042  489425 ssh_runner.go:195] Run: systemctl --version
	I1124 04:17:40.626095  489425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-600301
	I1124 04:17:40.652763  489425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/no-preload-600301/id_rsa Username:docker}
	I1124 04:17:40.773380  489425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:17:40.791427  489425 pause.go:52] kubelet running: true
	I1124 04:17:40.791512  489425 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:17:41.102230  489425 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:17:41.102393  489425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:17:41.201764  489425 cri.go:89] found id: "77afd0d3ea194d2cc191291986d2a24265aa9a172c1bfb35d2a19cd74ae1b0b1"
	I1124 04:17:41.201834  489425 cri.go:89] found id: "bad663499e3d7ea06fa9dd003e9f02d75a1bc8b3d6e129e0346e94af65d0f20f"
	I1124 04:17:41.201858  489425 cri.go:89] found id: "03cb1763245b01261c41affe67d5e6fa8faaad06c4e673e16d63aff37e96298d"
	I1124 04:17:41.201880  489425 cri.go:89] found id: "ccc8adf4a0cd356a92cfcf643a0bb1acfc023f9b49443edf4d15961bd8be64fa"
	I1124 04:17:41.201918  489425 cri.go:89] found id: "bf628534f2a9c982b95075e67d7f92874661a0aeeb8a0f8d1a25a2b637198bcb"
	I1124 04:17:41.201943  489425 cri.go:89] found id: "8f632cb5a12f7dae88e3c60421ff0ab241f680a7b63c4f65bb8eb84499a64e5b"
	I1124 04:17:41.201965  489425 cri.go:89] found id: "ed75a78a04580a5d6c612702f42fb21257b2917a5a53cb5bcaa4a18f5382a8d9"
	I1124 04:17:41.202003  489425 cri.go:89] found id: "4ac88f9f47aab0b24c518b68c22f81e6afea8260839ddedaef751e38026bf9d2"
	I1124 04:17:41.202024  489425 cri.go:89] found id: "55afed455b10e0b92f497f4cc207d5f38895ca7082a005ab16f9c05679590e1b"
	I1124 04:17:41.202048  489425 cri.go:89] found id: "7d2caa78e54e80be1a8a82757ad06b9829a7067b9809a00be95c8bb15d19514b"
	I1124 04:17:41.202082  489425 cri.go:89] found id: "4a692b478ef7ae43e58f4c41e564623fcf774e3807a46373ba7ce091dea7cfdc"
	I1124 04:17:41.202102  489425 cri.go:89] found id: ""
	I1124 04:17:41.202181  489425 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:17:41.213688  489425 retry.go:31] will retry after 172.458734ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:17:41Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:17:41.387167  489425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:17:41.407591  489425 pause.go:52] kubelet running: false
	I1124 04:17:41.407751  489425 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:17:41.665846  489425 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:17:41.666002  489425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:17:41.806262  489425 cri.go:89] found id: "77afd0d3ea194d2cc191291986d2a24265aa9a172c1bfb35d2a19cd74ae1b0b1"
	I1124 04:17:41.806335  489425 cri.go:89] found id: "bad663499e3d7ea06fa9dd003e9f02d75a1bc8b3d6e129e0346e94af65d0f20f"
	I1124 04:17:41.806368  489425 cri.go:89] found id: "03cb1763245b01261c41affe67d5e6fa8faaad06c4e673e16d63aff37e96298d"
	I1124 04:17:41.806387  489425 cri.go:89] found id: "ccc8adf4a0cd356a92cfcf643a0bb1acfc023f9b49443edf4d15961bd8be64fa"
	I1124 04:17:41.806421  489425 cri.go:89] found id: "bf628534f2a9c982b95075e67d7f92874661a0aeeb8a0f8d1a25a2b637198bcb"
	I1124 04:17:41.806446  489425 cri.go:89] found id: "8f632cb5a12f7dae88e3c60421ff0ab241f680a7b63c4f65bb8eb84499a64e5b"
	I1124 04:17:41.806501  489425 cri.go:89] found id: "ed75a78a04580a5d6c612702f42fb21257b2917a5a53cb5bcaa4a18f5382a8d9"
	I1124 04:17:41.806527  489425 cri.go:89] found id: "4ac88f9f47aab0b24c518b68c22f81e6afea8260839ddedaef751e38026bf9d2"
	I1124 04:17:41.806551  489425 cri.go:89] found id: "55afed455b10e0b92f497f4cc207d5f38895ca7082a005ab16f9c05679590e1b"
	I1124 04:17:41.806589  489425 cri.go:89] found id: "7d2caa78e54e80be1a8a82757ad06b9829a7067b9809a00be95c8bb15d19514b"
	I1124 04:17:41.806612  489425 cri.go:89] found id: "4a692b478ef7ae43e58f4c41e564623fcf774e3807a46373ba7ce091dea7cfdc"
	I1124 04:17:41.806632  489425 cri.go:89] found id: ""
	I1124 04:17:41.806716  489425 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:17:41.823881  489425 retry.go:31] will retry after 240.905461ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:17:41Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:17:42.065453  489425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:17:42.082165  489425 pause.go:52] kubelet running: false
	I1124 04:17:42.082289  489425 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:17:42.371750  489425 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:17:42.371867  489425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:17:42.463031  489425 cri.go:89] found id: "77afd0d3ea194d2cc191291986d2a24265aa9a172c1bfb35d2a19cd74ae1b0b1"
	I1124 04:17:42.463057  489425 cri.go:89] found id: "bad663499e3d7ea06fa9dd003e9f02d75a1bc8b3d6e129e0346e94af65d0f20f"
	I1124 04:17:42.463062  489425 cri.go:89] found id: "03cb1763245b01261c41affe67d5e6fa8faaad06c4e673e16d63aff37e96298d"
	I1124 04:17:42.463067  489425 cri.go:89] found id: "ccc8adf4a0cd356a92cfcf643a0bb1acfc023f9b49443edf4d15961bd8be64fa"
	I1124 04:17:42.463070  489425 cri.go:89] found id: "bf628534f2a9c982b95075e67d7f92874661a0aeeb8a0f8d1a25a2b637198bcb"
	I1124 04:17:42.463074  489425 cri.go:89] found id: "8f632cb5a12f7dae88e3c60421ff0ab241f680a7b63c4f65bb8eb84499a64e5b"
	I1124 04:17:42.463100  489425 cri.go:89] found id: "ed75a78a04580a5d6c612702f42fb21257b2917a5a53cb5bcaa4a18f5382a8d9"
	I1124 04:17:42.463112  489425 cri.go:89] found id: "4ac88f9f47aab0b24c518b68c22f81e6afea8260839ddedaef751e38026bf9d2"
	I1124 04:17:42.463116  489425 cri.go:89] found id: "55afed455b10e0b92f497f4cc207d5f38895ca7082a005ab16f9c05679590e1b"
	I1124 04:17:42.463123  489425 cri.go:89] found id: "7d2caa78e54e80be1a8a82757ad06b9829a7067b9809a00be95c8bb15d19514b"
	I1124 04:17:42.463126  489425 cri.go:89] found id: "4a692b478ef7ae43e58f4c41e564623fcf774e3807a46373ba7ce091dea7cfdc"
	I1124 04:17:42.463135  489425 cri.go:89] found id: ""
	I1124 04:17:42.463203  489425 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:17:42.481614  489425 out.go:203] 
	W1124 04:17:42.485229  489425 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:17:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:17:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 04:17:42.485260  489425 out.go:285] * 
	* 
	W1124 04:17:42.492477  489425 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 04:17:42.495699  489425 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-600301 --alsologtostderr -v=1 failed: exit status 80
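
Every pause attempt in the trace above dies at the same probe: after disabling the kubelet, minikube runs sudo runc list -f json over SSH, and runc exits with status 1 because /run/runc does not exist on this CRI-O node. The failing step can be replayed by hand with the same commands the trace drove, reached through minikube ssh; this is a diagnostic sketch, not part of the test suite:

	# Replay the probe that pause.go retried before giving up
	minikube -p no-preload-600301 ssh -- sudo runc list -f json

	# CRI-O's own view of the kube-system containers, which does not
	# depend on the default runc state directory /run/runc
	minikube -p no-preload-600301 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
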
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-600301
helpers_test.go:243: (dbg) docker inspect no-preload-600301:

-- stdout --
	[
	    {
	        "Id": "49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c",
	        "Created": "2025-11-24T04:14:55.518156491Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484424,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:16:35.574657188Z",
	            "FinishedAt": "2025-11-24T04:16:34.729941312Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/hostname",
	        "HostsPath": "/var/lib/docker/containers/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/hosts",
	        "LogPath": "/var/lib/docker/containers/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c-json.log",
	        "Name": "/no-preload-600301",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-600301:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-600301",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c",
	                "LowerDir": "/var/lib/docker/overlay2/eef5958de4b0cc15d3cf1c85d274e91ca573dec4105ed431ccc177b754c82fbb-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eef5958de4b0cc15d3cf1c85d274e91ca573dec4105ed431ccc177b754c82fbb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eef5958de4b0cc15d3cf1c85d274e91ca573dec4105ed431ccc177b754c82fbb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eef5958de4b0cc15d3cf1c85d274e91ca573dec4105ed431ccc177b754c82fbb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-600301",
	                "Source": "/var/lib/docker/volumes/no-preload-600301/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-600301",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-600301",
	                "name.minikube.sigs.k8s.io": "no-preload-600301",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f957bbfbd5e9eab86207bf15019237662b97a752bbdb3f548bee9e85a6ee5033",
	            "SandboxKey": "/var/run/docker/netns/f957bbfbd5e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-600301": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:df:f6:a5:2a:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ebf72ee754bee872530e47e2d8a7a6196e915259be85acc5eb692aa3f4588a34",
	                    "EndpointID": "73f0e4d92ed69df758b643738f8a7b48104661f5e692a29043181455db589222",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-600301",
	                        "49ddc9e82ab9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
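Single fields of the inspect dump above can be read back with a docker Go template instead of parsing the full JSON; the one-liner below is a sketch using the profile name from the dump, and mirrors the template minikube itself runs later in these logs to find the SSH port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-600301
	# prints 33441, the host port mapped to the container's 22/tcp in the dump above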
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-600301 -n no-preload-600301
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-600301 -n no-preload-600301: exit status 2 (459.813799ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-600301 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-600301 logs -n 25: (1.634624007s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-967682 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967682    │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ delete  │ -p cert-options-967682                                                                                                                                                                                                                        │ cert-options-967682    │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-762702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │                     │
	│ stop    │ -p old-k8s-version-762702 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-762702 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:13 UTC │
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:14 UTC │
	│ image   │ old-k8s-version-762702 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ pause   │ -p old-k8s-version-762702 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │                     │
	│ delete  │ -p old-k8s-version-762702                                                                                                                                                                                                                     │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ delete  │ -p old-k8s-version-762702                                                                                                                                                                                                                     │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p cert-expiration-918798 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-918798 │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:15 UTC │
	│ delete  │ -p cert-expiration-918798                                                                                                                                                                                                                     │ cert-expiration-918798 │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:15 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529     │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-600301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │                     │
	│ stop    │ -p no-preload-600301 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable dashboard -p no-preload-600301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-520529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-520529     │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ stop    │ -p embed-certs-520529 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-520529     │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-520529 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-520529     │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529     │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ image   │ no-preload-600301 image list --format=json                                                                                                                                                                                                    │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ pause   │ -p no-preload-600301 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:17:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:17:16.821639  487285 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:17:16.821754  487285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:17:16.821765  487285 out.go:374] Setting ErrFile to fd 2...
	I1124 04:17:16.821770  487285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:17:16.822022  487285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:17:16.822396  487285 out.go:368] Setting JSON to false
	I1124 04:17:16.823348  487285 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10766,"bootTime":1763947071,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:17:16.823418  487285 start.go:143] virtualization:  
	I1124 04:17:16.827309  487285 out.go:179] * [embed-certs-520529] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:17:16.831228  487285 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:17:16.831347  487285 notify.go:221] Checking for updates...
	I1124 04:17:16.837485  487285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:17:16.839709  487285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:17:16.842653  487285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:17:16.845494  487285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:17:16.848503  487285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:17:16.851901  487285 config.go:182] Loaded profile config "embed-certs-520529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:17:16.852436  487285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:17:16.875306  487285 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:17:16.875424  487285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:17:16.940510  487285 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:17:16.931395912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:17:16.940614  487285 docker.go:319] overlay module found
	I1124 04:17:16.945661  487285 out.go:179] * Using the docker driver based on existing profile
	I1124 04:17:16.948405  487285 start.go:309] selected driver: docker
	I1124 04:17:16.948428  487285 start.go:927] validating driver "docker" against &{Name:embed-certs-520529 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:17:16.948569  487285 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:17:16.949293  487285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:17:17.016148  487285 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:17:17.006397183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:17:17.016501  487285 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:17:17.016536  487285 cni.go:84] Creating CNI manager for ""
	I1124 04:17:17.016599  487285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:17:17.016645  487285 start.go:353] cluster config:
	{Name:embed-certs-520529 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:17:17.021816  487285 out.go:179] * Starting "embed-certs-520529" primary control-plane node in "embed-certs-520529" cluster
	I1124 04:17:17.024752  487285 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:17:17.027814  487285 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:17:17.030761  487285 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:17:17.030930  487285 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:17:17.030959  487285 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 04:17:17.030973  487285 cache.go:65] Caching tarball of preloaded images
	I1124 04:17:17.031052  487285 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:17:17.031068  487285 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 04:17:17.031179  487285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/config.json ...
	I1124 04:17:17.051723  487285 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:17:17.051757  487285 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:17:17.051773  487285 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:17:17.051801  487285 start.go:360] acquireMachinesLock for embed-certs-520529: {Name:mk545d2cd105b23ef8983ff95cd892d06612a01e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:17:17.051865  487285 start.go:364] duration metric: took 38.072µs to acquireMachinesLock for "embed-certs-520529"
	I1124 04:17:17.051888  487285 start.go:96] Skipping create...Using existing machine configuration
	I1124 04:17:17.051893  487285 fix.go:54] fixHost starting: 
	I1124 04:17:17.052159  487285 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:17:17.068772  487285 fix.go:112] recreateIfNeeded on embed-certs-520529: state=Stopped err=<nil>
	W1124 04:17:17.068805  487285 fix.go:138] unexpected machine state, will restart: <nil>
	W1124 04:17:15.387433  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	W1124 04:17:17.389029  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	W1124 04:17:19.887228  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	I1124 04:17:17.072018  487285 out.go:252] * Restarting existing docker container for "embed-certs-520529" ...
	I1124 04:17:17.072101  487285 cli_runner.go:164] Run: docker start embed-certs-520529
	I1124 04:17:17.314331  487285 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:17:17.334500  487285 kic.go:430] container "embed-certs-520529" state is running.
	I1124 04:17:17.334883  487285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-520529
	I1124 04:17:17.358165  487285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/config.json ...
	I1124 04:17:17.358414  487285 machine.go:94] provisionDockerMachine start ...
	I1124 04:17:17.358572  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:17.381123  487285 main.go:143] libmachine: Using SSH client type: native
	I1124 04:17:17.381453  487285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1124 04:17:17.381462  487285 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:17:17.383710  487285 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 04:17:20.538110  487285 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-520529
	
	I1124 04:17:20.538134  487285 ubuntu.go:182] provisioning hostname "embed-certs-520529"
	I1124 04:17:20.538208  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:20.556338  487285 main.go:143] libmachine: Using SSH client type: native
	I1124 04:17:20.556653  487285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1124 04:17:20.556672  487285 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-520529 && echo "embed-certs-520529" | sudo tee /etc/hostname
	I1124 04:17:20.715660  487285 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-520529
	
	I1124 04:17:20.715814  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:20.732891  487285 main.go:143] libmachine: Using SSH client type: native
	I1124 04:17:20.733206  487285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1124 04:17:20.733224  487285 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-520529' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-520529/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-520529' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 04:17:20.882848  487285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
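	# The shell snippet above makes the new hostname locally resolvable: it rewrites
	# an existing 127.0.1.1 entry in /etc/hosts, or appends one if nothing matches.
	# A quick hand check on the node (not part of this log) would be:
	#   grep '^127.0.1.1' /etc/hosts    # expected: 127.0.1.1 embed-certs-520529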
	I1124 04:17:20.882925  487285 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:17:20.882971  487285 ubuntu.go:190] setting up certificates
	I1124 04:17:20.883003  487285 provision.go:84] configureAuth start
	I1124 04:17:20.883089  487285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-520529
	I1124 04:17:20.907407  487285 provision.go:143] copyHostCerts
	I1124 04:17:20.907477  487285 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:17:20.907491  487285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:17:20.907568  487285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:17:20.907726  487285 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:17:20.907732  487285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:17:20.907759  487285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:17:20.907817  487285 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:17:20.907822  487285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:17:20.907845  487285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:17:20.907898  487285 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.embed-certs-520529 san=[127.0.0.1 192.168.76.2 embed-certs-520529 localhost minikube]
	I1124 04:17:21.236461  487285 provision.go:177] copyRemoteCerts
	I1124 04:17:21.236568  487285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:17:21.236646  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:21.254329  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:21.358563  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:17:21.377182  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 04:17:21.400325  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 04:17:21.418587  487285 provision.go:87] duration metric: took 535.559321ms to configureAuth
	I1124 04:17:21.418659  487285 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:17:21.418881  487285 config.go:182] Loaded profile config "embed-certs-520529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:17:21.418986  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:21.436723  487285 main.go:143] libmachine: Using SSH client type: native
	I1124 04:17:21.437051  487285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1124 04:17:21.437071  487285 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:17:21.848529  487285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
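	# The command above writes /etc/sysconfig/crio.minikube with an --insecure-registry
	# flag covering the service CIDR (10.96.0.0/12) and restarts CRI-O to apply it.
	# A hedged follow-up check, not run by the test:
	#   systemctl is-active crio    # expected: active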
	
	I1124 04:17:21.848558  487285 machine.go:97] duration metric: took 4.490133168s to provisionDockerMachine
	I1124 04:17:21.848576  487285 start.go:293] postStartSetup for "embed-certs-520529" (driver="docker")
	I1124 04:17:21.848593  487285 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:17:21.848677  487285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:17:21.848761  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:21.877873  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:21.995349  487285 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:17:21.999721  487285 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:17:21.999748  487285 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:17:21.999762  487285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:17:21.999833  487285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:17:21.999920  487285 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:17:22.000057  487285 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:17:22.016049  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:17:22.037679  487285 start.go:296] duration metric: took 189.085294ms for postStartSetup
	I1124 04:17:22.037770  487285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:17:22.037818  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:22.056186  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:22.159983  487285 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:17:22.165197  487285 fix.go:56] duration metric: took 5.113296211s for fixHost
	I1124 04:17:22.165235  487285 start.go:83] releasing machines lock for "embed-certs-520529", held for 5.113357816s
	I1124 04:17:22.165317  487285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-520529
	I1124 04:17:22.182843  487285 ssh_runner.go:195] Run: cat /version.json
	I1124 04:17:22.182893  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:22.182906  487285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:17:22.182969  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:22.203017  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:22.208049  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:22.389450  487285 ssh_runner.go:195] Run: systemctl --version
	I1124 04:17:22.397826  487285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:17:22.440314  487285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:17:22.445340  487285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:17:22.445430  487285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:17:22.453778  487285 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 04:17:22.453805  487285 start.go:496] detecting cgroup driver to use...
	I1124 04:17:22.453839  487285 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:17:22.453887  487285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:17:22.471850  487285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:17:22.485073  487285 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:17:22.485137  487285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:17:22.500482  487285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:17:22.516644  487285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:17:22.661370  487285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:17:22.787772  487285 docker.go:234] disabling docker service ...
	I1124 04:17:22.787884  487285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:17:22.804734  487285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:17:22.819858  487285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:17:22.945794  487285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:17:23.071863  487285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:17:23.086611  487285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:17:23.100975  487285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:17:23.101083  487285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.110326  487285 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:17:23.110446  487285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.119651  487285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.128992  487285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.137970  487285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:17:23.146821  487285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.155694  487285 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.164493  487285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.173768  487285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:17:23.181483  487285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:17:23.188957  487285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:17:23.298223  487285 ssh_runner.go:195] Run: sudo systemctl restart crio
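	# All of the sed edits above target /etc/crio/crio.conf.d/02-crio.conf: the pause
	# image, cgroup_manager "cgroupfs", conmon_cgroup "pod", and the
	# net.ipv4.ip_unprivileged_port_start=0 default sysctl. One compact way to review
	# the result (a sketch, not from the log):
	#   sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf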
	I1124 04:17:23.474289  487285 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:17:23.474357  487285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:17:23.478134  487285 start.go:564] Will wait 60s for crictl version
	I1124 04:17:23.478196  487285 ssh_runner.go:195] Run: which crictl
	I1124 04:17:23.481684  487285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:17:23.510655  487285 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:17:23.510757  487285 ssh_runner.go:195] Run: crio --version
	I1124 04:17:23.545279  487285 ssh_runner.go:195] Run: crio --version
	I1124 04:17:23.578208  487285 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:17:23.581037  487285 cli_runner.go:164] Run: docker network inspect embed-certs-520529 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:17:23.596941  487285 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 04:17:23.600821  487285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:17:23.610297  487285 kubeadm.go:884] updating cluster {Name:embed-certs-520529 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:17:23.610432  487285 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:17:23.610541  487285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:17:23.646693  487285 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:17:23.646721  487285 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:17:23.646782  487285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:17:23.677325  487285 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:17:23.677350  487285 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:17:23.677359  487285 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1124 04:17:23.677489  487285 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-520529 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
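	# The unit drop-in above lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	# (see the scp below). The empty ExecStart= line is deliberate systemd syntax: it
	# clears the packaged command before the override sets the full flag set. To view
	# the merged unit on the node:
	#   systemctl cat kubelet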
	I1124 04:17:23.677582  487285 ssh_runner.go:195] Run: crio config
	I1124 04:17:23.742616  487285 cni.go:84] Creating CNI manager for ""
	I1124 04:17:23.742639  487285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:17:23.742663  487285 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:17:23.742880  487285 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-520529 NodeName:embed-certs-520529 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:17:23.743050  487285 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-520529"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
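	# The kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new (see the
	# scp below). With the v1.34.1 binaries already on the node it could be
	# sanity-checked offline; a hedged example, not run by the test:
	#   sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new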
	
	I1124 04:17:23.743130  487285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:17:23.753983  487285 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:17:23.754104  487285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:17:23.762893  487285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 04:17:23.775306  487285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:17:23.788941  487285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1124 04:17:23.802501  487285 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:17:23.806192  487285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:17:23.817709  487285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:17:23.944632  487285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:17:23.960961  487285 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529 for IP: 192.168.76.2
	I1124 04:17:23.961029  487285 certs.go:195] generating shared ca certs ...
	I1124 04:17:23.961073  487285 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:17:23.961269  487285 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:17:23.961358  487285 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:17:23.961382  487285 certs.go:257] generating profile certs ...
	I1124 04:17:23.961519  487285 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/client.key
	I1124 04:17:23.961640  487285 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.key.be55c4bc
	I1124 04:17:23.961729  487285 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.key
	I1124 04:17:23.961882  487285 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:17:23.961953  487285 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:17:23.961981  487285 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:17:23.962051  487285 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:17:23.962107  487285 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:17:23.962171  487285 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:17:23.962259  487285 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:17:23.963107  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:17:23.988815  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:17:24.010159  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:17:24.030894  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:17:24.050281  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 04:17:24.070928  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 04:17:24.095322  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:17:24.119612  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 04:17:24.138585  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:17:24.163315  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:17:24.181981  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:17:24.205736  487285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:17:24.223519  487285 ssh_runner.go:195] Run: openssl version
	I1124 04:17:24.232069  487285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:17:24.243363  487285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:17:24.248165  487285 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:17:24.248305  487285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:17:24.291519  487285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 04:17:24.300440  487285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:17:24.309245  487285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:17:24.313299  487285 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:17:24.313400  487285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:17:24.358281  487285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:17:24.366397  487285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:17:24.374656  487285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:17:24.378273  487285 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:17:24.378352  487285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:17:24.420539  487285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
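
Each CA above is made system-trusted by hashing it with openssl x509 -hash and symlinking the cert into /etc/ssl/certs under "<hash>.0", which is how OpenSSL-based clients look up CAs by subject hash. A small sketch of that wiring, shelling out to openssl the same way the logged commands do (needs root for the symlink; paths taken from the log):

```go
// Sketch of the CA trust wiring: ask openssl for the cert's subject hash,
// then symlink the cert into /etc/ssl/certs as <hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mirror ln -fs: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
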
	I1124 04:17:24.428597  487285 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:17:24.432523  487285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 04:17:24.473749  487285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 04:17:24.516402  487285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 04:17:24.557902  487285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 04:17:24.601469  487285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 04:17:24.643617  487285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
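
The -checkend 86400 probes above confirm each control-plane certificate stays valid for at least another 24 hours before the existing cluster state is reused. The same check in pure Go with crypto/x509, as an illustrative sketch:

```go
// Illustrative Go version of `openssl x509 -checkend 86400`: report whether
// a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring "within d" means NotAfter falls before now+d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}
```
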
	I1124 04:17:24.689140  487285 kubeadm.go:401] StartCluster: {Name:embed-certs-520529 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:17:24.689296  487285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:17:24.689399  487285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:17:24.742905  487285 cri.go:89] found id: ""
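
An empty ID list here means no paused kube-system containers were found, so minikube moves on to checking the on-disk configuration. A hypothetical sketch of the listing step, shelling out to crictl with the same flags as the logged command:

```go
// Sketch: collect kube-system container IDs via crictl, one ID per line.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
```
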
	I1124 04:17:24.743040  487285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:17:24.755200  487285 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 04:17:24.755258  487285 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 04:17:24.755375  487285 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 04:17:24.767918  487285 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 04:17:24.768607  487285 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-520529" does not appear in /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:17:24.768938  487285 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-289526/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-520529" cluster setting kubeconfig missing "embed-certs-520529" context setting]
	I1124 04:17:24.769442  487285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:17:24.771280  487285 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 04:17:24.783337  487285 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 04:17:24.783449  487285 kubeadm.go:602] duration metric: took 28.135409ms to restartPrimaryControlPlane
	I1124 04:17:24.783478  487285 kubeadm.go:403] duration metric: took 94.348883ms to StartCluster
	I1124 04:17:24.783508  487285 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:17:24.783617  487285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:17:24.785019  487285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:17:24.785449  487285 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:17:24.785828  487285 config.go:182] Loaded profile config "embed-certs-520529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:17:24.785911  487285 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:17:24.786068  487285 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-520529"
	I1124 04:17:24.786114  487285 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-520529"
	W1124 04:17:24.786135  487285 addons.go:248] addon storage-provisioner should already be in state true
	I1124 04:17:24.786189  487285 host.go:66] Checking if "embed-certs-520529" exists ...
	I1124 04:17:24.786804  487285 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:17:24.787006  487285 addons.go:70] Setting dashboard=true in profile "embed-certs-520529"
	I1124 04:17:24.787043  487285 addons.go:239] Setting addon dashboard=true in "embed-certs-520529"
	W1124 04:17:24.787083  487285 addons.go:248] addon dashboard should already be in state true
	I1124 04:17:24.787124  487285 host.go:66] Checking if "embed-certs-520529" exists ...
	I1124 04:17:24.787338  487285 addons.go:70] Setting default-storageclass=true in profile "embed-certs-520529"
	I1124 04:17:24.787374  487285 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-520529"
	I1124 04:17:24.787668  487285 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:17:24.787670  487285 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:17:24.795655  487285 out.go:179] * Verifying Kubernetes components...
	I1124 04:17:24.803130  487285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:17:24.821508  487285 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:17:24.829902  487285 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:17:24.829937  487285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:17:24.830003  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:24.858430  487285 addons.go:239] Setting addon default-storageclass=true in "embed-certs-520529"
	W1124 04:17:24.858539  487285 addons.go:248] addon default-storageclass should already be in state true
	I1124 04:17:24.858568  487285 host.go:66] Checking if "embed-certs-520529" exists ...
	I1124 04:17:24.870364  487285 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:17:24.876994  487285 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 04:17:24.883433  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:24.891091  487285 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1124 04:17:21.887388  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	W1124 04:17:23.887589  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	I1124 04:17:24.894877  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 04:17:24.894900  487285 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 04:17:24.894964  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:24.913230  487285 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:17:24.913251  487285 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:17:24.913318  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:24.950690  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:24.963617  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:25.157380  487285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:17:25.169298  487285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:17:25.197726  487285 node_ready.go:35] waiting up to 6m0s for node "embed-certs-520529" to be "Ready" ...
	I1124 04:17:25.321041  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 04:17:25.321067  487285 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 04:17:25.330312  487285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:17:25.401347  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 04:17:25.401373  487285 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 04:17:25.476935  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 04:17:25.476975  487285 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 04:17:25.548322  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 04:17:25.548346  487285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 04:17:25.564682  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 04:17:25.564707  487285 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 04:17:25.580240  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 04:17:25.580265  487285 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 04:17:25.597243  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 04:17:25.597270  487285 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 04:17:25.611947  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 04:17:25.611972  487285 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 04:17:25.627263  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 04:17:25.627294  487285 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 04:17:25.643335  487285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1124 04:17:26.386329  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	I1124 04:17:26.887064  484296 pod_ready.go:94] pod "coredns-66bc5c9577-x6vx6" is "Ready"
	I1124 04:17:26.887088  484296 pod_ready.go:86] duration metric: took 36.005702344s for pod "coredns-66bc5c9577-x6vx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:26.890128  484296 pod_ready.go:83] waiting for pod "etcd-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:26.894905  484296 pod_ready.go:94] pod "etcd-no-preload-600301" is "Ready"
	I1124 04:17:26.894983  484296 pod_ready.go:86] duration metric: took 4.830181ms for pod "etcd-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:26.897468  484296 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:26.904062  484296 pod_ready.go:94] pod "kube-apiserver-no-preload-600301" is "Ready"
	I1124 04:17:26.904136  484296 pod_ready.go:86] duration metric: took 6.59831ms for pod "kube-apiserver-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:26.906358  484296 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:27.084739  484296 pod_ready.go:94] pod "kube-controller-manager-no-preload-600301" is "Ready"
	I1124 04:17:27.084764  484296 pod_ready.go:86] duration metric: took 178.336372ms for pod "kube-controller-manager-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:27.284601  484296 pod_ready.go:83] waiting for pod "kube-proxy-bzg2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:27.684529  484296 pod_ready.go:94] pod "kube-proxy-bzg2j" is "Ready"
	I1124 04:17:27.684554  484296 pod_ready.go:86] duration metric: took 399.929244ms for pod "kube-proxy-bzg2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:27.884663  484296 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:28.284893  484296 pod_ready.go:94] pod "kube-scheduler-no-preload-600301" is "Ready"
	I1124 04:17:28.284970  484296 pod_ready.go:86] duration metric: took 400.230695ms for pod "kube-scheduler-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:28.285001  484296 pod_ready.go:40] duration metric: took 37.407980948s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:17:28.369480  484296 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 04:17:28.373709  484296 out.go:179] * Done! kubectl is now configured to use "no-preload-600301" cluster and "default" namespace by default
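
The pod_ready waits interleaved above poll each kube-system pod until its PodReady condition turns True (or the pod disappears). A minimal client-go sketch of that loop; the pod name is taken from the log, and using the default kubeconfig location is an assumption of this sketch:

```go
// Sketch: poll one pod until its PodReady condition is True, with a timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-x6vx6", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod")
		case <-time.After(2 * time.Second):
		}
	}
}
```
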
	I1124 04:17:29.589988  487285 node_ready.go:49] node "embed-certs-520529" is "Ready"
	I1124 04:17:29.590078  487285 node_ready.go:38] duration metric: took 4.392298505s for node "embed-certs-520529" to be "Ready" ...
	I1124 04:17:29.590109  487285 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:17:29.590197  487285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:17:31.046367  487285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.87703251s)
	I1124 04:17:31.046433  487285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.716096896s)
	I1124 04:17:31.099223  487285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.45580834s)
	I1124 04:17:31.099523  487285 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.509286148s)
	I1124 04:17:31.099588  487285 api_server.go:72] duration metric: took 6.314061602s to wait for apiserver process to appear ...
	I1124 04:17:31.099609  487285 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:17:31.099656  487285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:17:31.102507  487285 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-520529 addons enable metrics-server
	
	I1124 04:17:31.105947  487285 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 04:17:31.108881  487285 addons.go:530] duration metric: took 6.3229639s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 04:17:31.119640  487285 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 04:17:31.119722  487285 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 04:17:31.600205  487285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:17:31.609542  487285 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 04:17:31.610846  487285 api_server.go:141] control plane version: v1.34.1
	I1124 04:17:31.610904  487285 api_server.go:131] duration metric: took 511.262676ms to wait for apiserver health ...
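
The healthz wait above tolerates transient 500s (the rbac/bootstrap-roles hook had not finished) and simply retries until /healthz returns 200. A self-contained sketch of that polling; TLS verification is skipped only because this sketch does not load minikube's CA from /var/lib/minikube/certs/ca.crt:

```go
// Sketch: retry GET /healthz until it returns 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy; transient 500s fall through and retry
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
```
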
	I1124 04:17:31.610942  487285 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:17:31.615224  487285 system_pods.go:59] 8 kube-system pods found
	I1124 04:17:31.615308  487285 system_pods.go:61] "coredns-66bc5c9577-bvwhr" [afc820fb-a24a-4fb0-b2c9-8c5e2014a762] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:17:31.615334  487285 system_pods.go:61] "etcd-embed-certs-520529" [f26ae428-2218-4bda-9b92-578d85c74df8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:17:31.615386  487285 system_pods.go:61] "kindnet-tkncp" [eccdf0bd-3245-4547-aed3-65ae2e72ed82] Running
	I1124 04:17:31.615414  487285 system_pods.go:61] "kube-apiserver-embed-certs-520529" [d25fe462-1c8b-467f-8c81-4610bd9173c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:17:31.615441  487285 system_pods.go:61] "kube-controller-manager-embed-certs-520529" [093b15ed-2629-4f07-aacb-21da8fe15032] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:17:31.615477  487285 system_pods.go:61] "kube-proxy-dt4th" [47798ce5-c1f5-4f74-a933-76514aee25a3] Running
	I1124 04:17:31.615501  487285 system_pods.go:61] "kube-scheduler-embed-certs-520529" [2a37b8ab-a5c8-45f3-9bc9-3e233a33c05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:17:31.615538  487285 system_pods.go:61] "storage-provisioner" [bad7a9be-48f5-443b-824e-859f9e21d194] Running
	I1124 04:17:31.615562  487285 system_pods.go:74] duration metric: took 4.597957ms to wait for pod list to return data ...
	I1124 04:17:31.615583  487285 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:17:31.618315  487285 default_sa.go:45] found service account: "default"
	I1124 04:17:31.618373  487285 default_sa.go:55] duration metric: took 2.756046ms for default service account to be created ...
	I1124 04:17:31.618411  487285 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 04:17:31.623799  487285 system_pods.go:86] 8 kube-system pods found
	I1124 04:17:31.623880  487285 system_pods.go:89] "coredns-66bc5c9577-bvwhr" [afc820fb-a24a-4fb0-b2c9-8c5e2014a762] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:17:31.623906  487285 system_pods.go:89] "etcd-embed-certs-520529" [f26ae428-2218-4bda-9b92-578d85c74df8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:17:31.623946  487285 system_pods.go:89] "kindnet-tkncp" [eccdf0bd-3245-4547-aed3-65ae2e72ed82] Running
	I1124 04:17:31.623977  487285 system_pods.go:89] "kube-apiserver-embed-certs-520529" [d25fe462-1c8b-467f-8c81-4610bd9173c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:17:31.624000  487285 system_pods.go:89] "kube-controller-manager-embed-certs-520529" [093b15ed-2629-4f07-aacb-21da8fe15032] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:17:31.624036  487285 system_pods.go:89] "kube-proxy-dt4th" [47798ce5-c1f5-4f74-a933-76514aee25a3] Running
	I1124 04:17:31.624065  487285 system_pods.go:89] "kube-scheduler-embed-certs-520529" [2a37b8ab-a5c8-45f3-9bc9-3e233a33c05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:17:31.624088  487285 system_pods.go:89] "storage-provisioner" [bad7a9be-48f5-443b-824e-859f9e21d194] Running
	I1124 04:17:31.624125  487285 system_pods.go:126] duration metric: took 5.689572ms to wait for k8s-apps to be running ...
	I1124 04:17:31.624152  487285 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 04:17:31.624237  487285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:17:31.638826  487285 system_svc.go:56] duration metric: took 14.666965ms WaitForService to wait for kubelet
	I1124 04:17:31.638859  487285 kubeadm.go:587] duration metric: took 6.853353986s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:17:31.638879  487285 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:17:31.641421  487285 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:17:31.641456  487285 node_conditions.go:123] node cpu capacity is 2
	I1124 04:17:31.641469  487285 node_conditions.go:105] duration metric: took 2.583803ms to run NodePressure ...
	I1124 04:17:31.641482  487285 start.go:242] waiting for startup goroutines ...
	I1124 04:17:31.641490  487285 start.go:247] waiting for cluster config update ...
	I1124 04:17:31.641501  487285 start.go:256] writing updated cluster config ...
	I1124 04:17:31.641773  487285 ssh_runner.go:195] Run: rm -f paused
	I1124 04:17:31.646082  487285 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:17:31.650050  487285 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bvwhr" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 04:17:33.656353  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:17:35.676719  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:17:38.156551  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:17:40.158712  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.537044623Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.546073192Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.546107826Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.546127945Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.553459007Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.553496924Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.553518134Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.562641785Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.562800704Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.562873492Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.568090354Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.568127351Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.578965419Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9095e1e9-3765-4d00-9100-957e56ab2e15 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.580603979Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=be2f6067-b3e7-4734-92c3-609f234aa7aa name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.581610129Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7/dashboard-metrics-scraper" id=047166df-26df-43f9-ac73-17a482bb0780 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.581705236Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.589024605Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.590029959Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.61903504Z" level=info msg="Created container 7d2caa78e54e80be1a8a82757ad06b9829a7067b9809a00be95c8bb15d19514b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7/dashboard-metrics-scraper" id=047166df-26df-43f9-ac73-17a482bb0780 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.61999945Z" level=info msg="Starting container: 7d2caa78e54e80be1a8a82757ad06b9829a7067b9809a00be95c8bb15d19514b" id=5879d2a9-bd73-47f3-a51f-bd692a2beda7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.627792121Z" level=info msg="Started container" PID=1749 containerID=7d2caa78e54e80be1a8a82757ad06b9829a7067b9809a00be95c8bb15d19514b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7/dashboard-metrics-scraper id=5879d2a9-bd73-47f3-a51f-bd692a2beda7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bca19331db37758574140ef0dec468f2e692158319896de6cc9794f9c57eee5c
	Nov 24 04:17:37 no-preload-600301 conmon[1747]: conmon 7d2caa78e54e80be1a8a <ninfo>: container 1749 exited with status 1
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.86578921Z" level=info msg="Removing container: 99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6" id=26e8774a-666e-48c7-94d3-e7425f63f00d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.878951615Z" level=info msg="Error loading conmon cgroup of container 99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6: cgroup deleted" id=26e8774a-666e-48c7-94d3-e7425f63f00d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.882416807Z" level=info msg="Removed container 99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7/dashboard-metrics-scraper" id=26e8774a-666e-48c7-94d3-e7425f63f00d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7d2caa78e54e8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago        Exited              dashboard-metrics-scraper   3                   bca19331db377       dashboard-metrics-scraper-6ffb444bf9-wjlb7   kubernetes-dashboard
	77afd0d3ea194       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           23 seconds ago       Running             storage-provisioner         2                   86e889e3e43ec       storage-provisioner                          kube-system
	4a692b478ef7a       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago       Running             kubernetes-dashboard        0                   d367f2a5d62cb       kubernetes-dashboard-855c9754f9-q7r6n        kubernetes-dashboard
	bad663499e3d7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   5d827da865141       coredns-66bc5c9577-x6vx6                     kube-system
	32d3fa09f2669       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   b23059496ca5c       busybox                                      default
	03cb1763245b0       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago       Exited              storage-provisioner         1                   86e889e3e43ec       storage-provisioner                          kube-system
	ccc8adf4a0cd3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   90c24029711fe       kindnet-rqpt9                                kube-system
	bf628534f2a9c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   d38a42ae5d9b4       kube-proxy-bzg2j                             kube-system
	8f632cb5a12f7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   50d1155acd7ae       kube-controller-manager-no-preload-600301    kube-system
	ed75a78a04580       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   3e506b8589361       kube-scheduler-no-preload-600301             kube-system
	4ac88f9f47aab       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   90eb1665032bd       etcd-no-preload-600301                       kube-system
	55afed455b10e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   9d766c8f32e63       kube-apiserver-no-preload-600301             kube-system
	
	
	==> coredns [bad663499e3d7ea06fa9dd003e9f02d75a1bc8b3d6e129e0346e94af65d0f20f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:49033 - 35610 "HINFO IN 6166004667448281061.8125994888046049801. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016784171s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
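
The dial timeouts above mean coredns could not yet reach the kubernetes service VIP (10.96.0.1:443) during the restart. A trivial probe for the same condition, which would need to run inside the cluster's network namespace to be meaningful:

```go
// Sketch: TCP-connect to the kubernetes service VIP and report the result.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // matches the i/o timeouts in the log
		return
	}
	conn.Close()
	fmt.Println("service VIP reachable")
}
```
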
	
	
	==> describe nodes <==
	Name:               no-preload-600301
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-600301
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=no-preload-600301
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_15_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:15:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-600301
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:17:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:17:19 +0000   Mon, 24 Nov 2025 04:15:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:17:19 +0000   Mon, 24 Nov 2025 04:15:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:17:19 +0000   Mon, 24 Nov 2025 04:15:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:17:19 +0000   Mon, 24 Nov 2025 04:16:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-600301
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                d1ffab9e-c111-4d9d-8ac8-cb5bfd0ed15c
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-x6vx6                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 etcd-no-preload-600301                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-rqpt9                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-600301              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-no-preload-600301     200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-bzg2j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-600301              100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wjlb7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-q7r6n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 112s                 kube-proxy       
	  Normal   Starting                 53s                  kube-proxy       
	  Warning  CgroupV1                 2m9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node no-preload-600301 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node no-preload-600301 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node no-preload-600301 status is now: NodeHasSufficientPID
	  Normal   Starting                 119s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  118s                 kubelet          Node no-preload-600301 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    118s                 kubelet          Node no-preload-600301 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     118s                 kubelet          Node no-preload-600301 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 118s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           115s                 node-controller  Node no-preload-600301 event: Registered Node no-preload-600301 in Controller
	  Normal   NodeReady                97s                  kubelet          Node no-preload-600301 status is now: NodeReady
	  Normal   Starting                 62s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)    kubelet          Node no-preload-600301 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)    kubelet          Node no-preload-600301 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)    kubelet          Node no-preload-600301 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                  node-controller  Node no-preload-600301 event: Registered Node no-preload-600301 in Controller
	
	
	==> dmesg <==
	[Nov24 03:54] overlayfs: idmapped layers are currently not supported
	[Nov24 03:55] overlayfs: idmapped layers are currently not supported
	[Nov24 03:56] overlayfs: idmapped layers are currently not supported
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	[Nov24 04:15] overlayfs: idmapped layers are currently not supported
	[ +47.476343] overlayfs: idmapped layers are currently not supported
	[Nov24 04:16] overlayfs: idmapped layers are currently not supported
	[Nov24 04:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4ac88f9f47aab0b24c518b68c22f81e6afea8260839ddedaef751e38026bf9d2] <==
	{"level":"warn","ts":"2025-11-24T04:16:47.299543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.323552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.397601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.429653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.467644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.503616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.542705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.572123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.610132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.646059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.695365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.716016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.758822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.759801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.772198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.790576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.813090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.824152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.841306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.865449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.886404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.912420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.933206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.958817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:48.094580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40470","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 04:17:44 up  2:59,  0 user,  load average: 4.53, 3.55, 2.90
	Linux no-preload-600301 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ccc8adf4a0cd356a92cfcf643a0bb1acfc023f9b49443edf4d15961bd8be64fa] <==
	I1124 04:16:50.266302       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:16:50.315110       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 04:16:50.315381       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:16:50.315425       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:16:50.315452       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:16:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:16:50.530739       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:16:50.530756       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:16:50.530765       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:16:50.531051       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 04:17:20.530718       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 04:17:20.530802       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 04:17:20.532020       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 04:17:20.532091       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 04:17:21.731528       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:17:21.731557       1 metrics.go:72] Registering metrics
	I1124 04:17:21.731611       1 controller.go:711] "Syncing nftables rules"
	I1124 04:17:30.531205       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:17:30.531290       1 main.go:301] handling current node
	I1124 04:17:40.530873       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:17:40.530905       1 main.go:301] handling current node
	
	
	==> kube-apiserver [55afed455b10e0b92f497f4cc207d5f38895ca7082a005ab16f9c05679590e1b] <==
	I1124 04:16:49.148760       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 04:16:49.148898       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 04:16:49.148925       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 04:16:49.148955       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 04:16:49.148968       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 04:16:49.150339       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 04:16:49.150391       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 04:16:49.152772       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 04:16:49.152928       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:16:49.153959       1 aggregator.go:171] initial CRD sync complete...
	I1124 04:16:49.153989       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 04:16:49.153996       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 04:16:49.154002       1 cache.go:39] Caches are synced for autoregister controller
	I1124 04:16:49.159301       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:16:49.516192       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:16:49.760136       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 04:16:49.765058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:16:49.842966       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 04:16:49.902912       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:16:49.922402       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:16:50.143173       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.119.35"}
	I1124 04:16:50.191118       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.211.209"}
	I1124 04:16:52.774621       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 04:16:52.876086       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 04:16:52.989991       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8f632cb5a12f7dae88e3c60421ff0ab241f680a7b63c4f65bb8eb84499a64e5b] <==
	I1124 04:16:52.383354       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:16:52.383373       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:16:52.383381       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:16:52.388755       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 04:16:52.388765       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 04:16:52.392391       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 04:16:52.392856       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 04:16:52.397198       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 04:16:52.398366       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 04:16:52.400637       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 04:16:52.406839       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 04:16:52.409086       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 04:16:52.416500       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 04:16:52.416909       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 04:16:52.417089       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 04:16:52.418038       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 04:16:52.418087       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 04:16:52.418071       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 04:16:52.418056       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 04:16:52.418791       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 04:16:52.419979       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 04:16:52.422690       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:16:52.424848       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 04:16:53.012724       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1124 04:16:53.013130       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [bf628534f2a9c982b95075e67d7f92874661a0aeeb8a0f8d1a25a2b637198bcb] <==
	I1124 04:16:50.503717       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:16:50.628728       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:16:50.835562       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:16:50.835716       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 04:16:50.835835       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:16:50.866723       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:16:50.867737       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:16:50.874799       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:16:50.875165       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:16:50.875410       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:16:50.879237       1 config.go:200] "Starting service config controller"
	I1124 04:16:50.879332       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:16:50.879381       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:16:50.879424       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:16:50.879461       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:16:50.879495       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:16:50.880210       1 config.go:309] "Starting node config controller"
	I1124 04:16:50.880271       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:16:50.880304       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:16:50.982002       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:16:50.982163       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 04:16:50.982519       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ed75a78a04580a5d6c612702f42fb21257b2917a5a53cb5bcaa4a18f5382a8d9] <==
	I1124 04:16:46.625421       1 serving.go:386] Generated self-signed cert in-memory
	W1124 04:16:48.968484       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 04:16:48.968529       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 04:16:48.968549       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 04:16:48.968556       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 04:16:49.094014       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 04:16:49.094047       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:16:49.118584       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 04:16:49.118691       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:16:49.118709       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:16:49.118724       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 04:16:49.219613       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:16:53 no-preload-600301 kubelet[785]: W1124 04:16:53.365035     785 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/crio-d367f2a5d62cb9efd4d5d7fe3da4100db57fc4d8141999b3b462e0c052cb1117 WatchSource:0}: Error finding container d367f2a5d62cb9efd4d5d7fe3da4100db57fc4d8141999b3b462e0c052cb1117: Status 404 returned error can't find the container with id d367f2a5d62cb9efd4d5d7fe3da4100db57fc4d8141999b3b462e0c052cb1117
	Nov 24 04:16:56 no-preload-600301 kubelet[785]: I1124 04:16:56.431595     785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 04:16:57 no-preload-600301 kubelet[785]: I1124 04:16:57.715758     785 scope.go:117] "RemoveContainer" containerID="4298790382f8a407ab5b8225d6273618fdfb54479db5d57698d76c3aa1e0705e"
	Nov 24 04:16:58 no-preload-600301 kubelet[785]: I1124 04:16:58.721136     785 scope.go:117] "RemoveContainer" containerID="4298790382f8a407ab5b8225d6273618fdfb54479db5d57698d76c3aa1e0705e"
	Nov 24 04:16:58 no-preload-600301 kubelet[785]: I1124 04:16:58.721408     785 scope.go:117] "RemoveContainer" containerID="a5a908d84a7f81ba233fb77a69e5cb000916cca6cd564a4bbf4df2488e33a5a6"
	Nov 24 04:16:58 no-preload-600301 kubelet[785]: E1124 04:16:58.721547     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wjlb7_kubernetes-dashboard(70e8e9d2-86da-45e4-9b28-5081999bc4df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7" podUID="70e8e9d2-86da-45e4-9b28-5081999bc4df"
	Nov 24 04:16:59 no-preload-600301 kubelet[785]: I1124 04:16:59.725895     785 scope.go:117] "RemoveContainer" containerID="a5a908d84a7f81ba233fb77a69e5cb000916cca6cd564a4bbf4df2488e33a5a6"
	Nov 24 04:16:59 no-preload-600301 kubelet[785]: E1124 04:16:59.726098     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wjlb7_kubernetes-dashboard(70e8e9d2-86da-45e4-9b28-5081999bc4df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7" podUID="70e8e9d2-86da-45e4-9b28-5081999bc4df"
	Nov 24 04:17:03 no-preload-600301 kubelet[785]: I1124 04:17:03.291752     785 scope.go:117] "RemoveContainer" containerID="a5a908d84a7f81ba233fb77a69e5cb000916cca6cd564a4bbf4df2488e33a5a6"
	Nov 24 04:17:03 no-preload-600301 kubelet[785]: E1124 04:17:03.291942     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wjlb7_kubernetes-dashboard(70e8e9d2-86da-45e4-9b28-5081999bc4df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7" podUID="70e8e9d2-86da-45e4-9b28-5081999bc4df"
	Nov 24 04:17:16 no-preload-600301 kubelet[785]: I1124 04:17:16.578242     785 scope.go:117] "RemoveContainer" containerID="a5a908d84a7f81ba233fb77a69e5cb000916cca6cd564a4bbf4df2488e33a5a6"
	Nov 24 04:17:16 no-preload-600301 kubelet[785]: I1124 04:17:16.792000     785 scope.go:117] "RemoveContainer" containerID="a5a908d84a7f81ba233fb77a69e5cb000916cca6cd564a4bbf4df2488e33a5a6"
	Nov 24 04:17:16 no-preload-600301 kubelet[785]: I1124 04:17:16.792792     785 scope.go:117] "RemoveContainer" containerID="99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6"
	Nov 24 04:17:16 no-preload-600301 kubelet[785]: E1124 04:17:16.793098     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wjlb7_kubernetes-dashboard(70e8e9d2-86da-45e4-9b28-5081999bc4df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7" podUID="70e8e9d2-86da-45e4-9b28-5081999bc4df"
	Nov 24 04:17:16 no-preload-600301 kubelet[785]: I1124 04:17:16.819684     785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-q7r6n" podStartSLOduration=12.785788899 podStartE2EDuration="24.819665342s" podCreationTimestamp="2025-11-24 04:16:52 +0000 UTC" firstStartedPulling="2025-11-24 04:16:53.374415703 +0000 UTC m=+11.012717534" lastFinishedPulling="2025-11-24 04:17:05.408292154 +0000 UTC m=+23.046593977" observedRunningTime="2025-11-24 04:17:05.798875905 +0000 UTC m=+23.437177826" watchObservedRunningTime="2025-11-24 04:17:16.819665342 +0000 UTC m=+34.457967165"
	Nov 24 04:17:20 no-preload-600301 kubelet[785]: I1124 04:17:20.806124     785 scope.go:117] "RemoveContainer" containerID="03cb1763245b01261c41affe67d5e6fa8faaad06c4e673e16d63aff37e96298d"
	Nov 24 04:17:23 no-preload-600301 kubelet[785]: I1124 04:17:23.291665     785 scope.go:117] "RemoveContainer" containerID="99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6"
	Nov 24 04:17:23 no-preload-600301 kubelet[785]: E1124 04:17:23.291848     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wjlb7_kubernetes-dashboard(70e8e9d2-86da-45e4-9b28-5081999bc4df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7" podUID="70e8e9d2-86da-45e4-9b28-5081999bc4df"
	Nov 24 04:17:37 no-preload-600301 kubelet[785]: I1124 04:17:37.577855     785 scope.go:117] "RemoveContainer" containerID="99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6"
	Nov 24 04:17:37 no-preload-600301 kubelet[785]: I1124 04:17:37.853821     785 scope.go:117] "RemoveContainer" containerID="99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6"
	Nov 24 04:17:37 no-preload-600301 kubelet[785]: I1124 04:17:37.854335     785 scope.go:117] "RemoveContainer" containerID="7d2caa78e54e80be1a8a82757ad06b9829a7067b9809a00be95c8bb15d19514b"
	Nov 24 04:17:37 no-preload-600301 kubelet[785]: E1124 04:17:37.854684     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wjlb7_kubernetes-dashboard(70e8e9d2-86da-45e4-9b28-5081999bc4df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7" podUID="70e8e9d2-86da-45e4-9b28-5081999bc4df"
	Nov 24 04:17:41 no-preload-600301 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 04:17:41 no-preload-600301 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 04:17:41 no-preload-600301 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4a692b478ef7ae43e58f4c41e564623fcf774e3807a46373ba7ce091dea7cfdc] <==
	2025/11/24 04:17:05 Using namespace: kubernetes-dashboard
	2025/11/24 04:17:05 Using in-cluster config to connect to apiserver
	2025/11/24 04:17:05 Using secret token for csrf signing
	2025/11/24 04:17:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 04:17:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 04:17:05 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 04:17:05 Generating JWE encryption key
	2025/11/24 04:17:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 04:17:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 04:17:05 Initializing JWE encryption key from synchronized object
	2025/11/24 04:17:05 Creating in-cluster Sidecar client
	2025/11/24 04:17:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 04:17:05 Serving insecurely on HTTP port: 9090
	2025/11/24 04:17:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 04:17:05 Starting overwatch
	
	
	==> storage-provisioner [03cb1763245b01261c41affe67d5e6fa8faaad06c4e673e16d63aff37e96298d] <==
	I1124 04:16:50.474136       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 04:17:20.476869       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [77afd0d3ea194d2cc191291986d2a24265aa9a172c1bfb35d2a19cd74ae1b0b1] <==
	I1124 04:17:20.867184       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 04:17:20.890563       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 04:17:20.890716       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 04:17:20.893973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:24.349227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:28.611915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:32.210198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:35.264232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:38.285924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:38.292507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:17:38.292643       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 04:17:38.292793       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-600301_45fa2777-a0cf-4b1a-9b53-ff6b2b9bbd73!
	I1124 04:17:38.293633       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"158a062c-c5ad-4735-ae08-e89f4d9cb5f4", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-600301_45fa2777-a0cf-4b1a-9b53-ff6b2b9bbd73 became leader
	W1124 04:17:38.302379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:38.309528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:17:38.393863       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-600301_45fa2777-a0cf-4b1a-9b53-ff6b2b9bbd73!
	W1124 04:17:40.314723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:40.329211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:42.337674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:42.344696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:44.348063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:44.353712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-600301 -n no-preload-600301
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-600301 -n no-preload-600301: exit status 2 (414.857485ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-600301 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-600301
helpers_test.go:243: (dbg) docker inspect no-preload-600301:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c",
	        "Created": "2025-11-24T04:14:55.518156491Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484424,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:16:35.574657188Z",
	            "FinishedAt": "2025-11-24T04:16:34.729941312Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/hostname",
	        "HostsPath": "/var/lib/docker/containers/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/hosts",
	        "LogPath": "/var/lib/docker/containers/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c-json.log",
	        "Name": "/no-preload-600301",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-600301:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-600301",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c",
	                "LowerDir": "/var/lib/docker/overlay2/eef5958de4b0cc15d3cf1c85d274e91ca573dec4105ed431ccc177b754c82fbb-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eef5958de4b0cc15d3cf1c85d274e91ca573dec4105ed431ccc177b754c82fbb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eef5958de4b0cc15d3cf1c85d274e91ca573dec4105ed431ccc177b754c82fbb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eef5958de4b0cc15d3cf1c85d274e91ca573dec4105ed431ccc177b754c82fbb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-600301",
	                "Source": "/var/lib/docker/volumes/no-preload-600301/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-600301",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-600301",
	                "name.minikube.sigs.k8s.io": "no-preload-600301",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f957bbfbd5e9eab86207bf15019237662b97a752bbdb3f548bee9e85a6ee5033",
	            "SandboxKey": "/var/run/docker/netns/f957bbfbd5e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-600301": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:df:f6:a5:2a:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ebf72ee754bee872530e47e2d8a7a6196e915259be85acc5eb692aa3f4588a34",
	                    "EndpointID": "73f0e4d92ed69df758b643738f8a7b48104661f5e692a29043181455db589222",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-600301",
	                        "49ddc9e82ab9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-600301 -n no-preload-600301
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-600301 -n no-preload-600301: exit status 2 (399.189186ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-600301 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-600301 logs -n 25: (1.319824937s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-967682 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-967682    │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ delete  │ -p cert-options-967682                                                                                                                                                                                                                        │ cert-options-967682    │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:12 UTC │
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:12 UTC │ 24 Nov 25 04:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-762702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │                     │
	│ stop    │ -p old-k8s-version-762702 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-762702 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:13 UTC │
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:14 UTC │
	│ image   │ old-k8s-version-762702 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ pause   │ -p old-k8s-version-762702 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │                     │
	│ delete  │ -p old-k8s-version-762702                                                                                                                                                                                                                     │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ delete  │ -p old-k8s-version-762702                                                                                                                                                                                                                     │ old-k8s-version-762702 │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p cert-expiration-918798 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-918798 │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:15 UTC │
	│ delete  │ -p cert-expiration-918798                                                                                                                                                                                                                     │ cert-expiration-918798 │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:15 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529     │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-600301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │                     │
	│ stop    │ -p no-preload-600301 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable dashboard -p no-preload-600301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-520529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-520529     │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ stop    │ -p embed-certs-520529 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-520529     │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-520529 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-520529     │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529     │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ image   │ no-preload-600301 image list --format=json                                                                                                                                                                                                    │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ pause   │ -p no-preload-600301 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-600301      │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:17:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:17:16.821639  487285 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:17:16.821754  487285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:17:16.821765  487285 out.go:374] Setting ErrFile to fd 2...
	I1124 04:17:16.821770  487285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:17:16.822022  487285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:17:16.822396  487285 out.go:368] Setting JSON to false
	I1124 04:17:16.823348  487285 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10766,"bootTime":1763947071,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:17:16.823418  487285 start.go:143] virtualization:  
	I1124 04:17:16.827309  487285 out.go:179] * [embed-certs-520529] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:17:16.831228  487285 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:17:16.831347  487285 notify.go:221] Checking for updates...
	I1124 04:17:16.837485  487285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:17:16.839709  487285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:17:16.842653  487285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:17:16.845494  487285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:17:16.848503  487285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:17:16.851901  487285 config.go:182] Loaded profile config "embed-certs-520529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:17:16.852436  487285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:17:16.875306  487285 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:17:16.875424  487285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:17:16.940510  487285 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:17:16.931395912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:17:16.940614  487285 docker.go:319] overlay module found
	I1124 04:17:16.945661  487285 out.go:179] * Using the docker driver based on existing profile
	I1124 04:17:16.948405  487285 start.go:309] selected driver: docker
	I1124 04:17:16.948428  487285 start.go:927] validating driver "docker" against &{Name:embed-certs-520529 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:17:16.948569  487285 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:17:16.949293  487285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:17:17.016148  487285 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:17:17.006397183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:17:17.016501  487285 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:17:17.016536  487285 cni.go:84] Creating CNI manager for ""
	I1124 04:17:17.016599  487285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:17:17.016645  487285 start.go:353] cluster config:
	{Name:embed-certs-520529 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:17:17.021816  487285 out.go:179] * Starting "embed-certs-520529" primary control-plane node in "embed-certs-520529" cluster
	I1124 04:17:17.024752  487285 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:17:17.027814  487285 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:17:17.030761  487285 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:17:17.030930  487285 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:17:17.030959  487285 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 04:17:17.030973  487285 cache.go:65] Caching tarball of preloaded images
	I1124 04:17:17.031052  487285 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:17:17.031068  487285 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 04:17:17.031179  487285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/config.json ...
	I1124 04:17:17.051723  487285 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:17:17.051757  487285 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:17:17.051773  487285 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:17:17.051801  487285 start.go:360] acquireMachinesLock for embed-certs-520529: {Name:mk545d2cd105b23ef8983ff95cd892d06612a01e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:17:17.051865  487285 start.go:364] duration metric: took 38.072µs to acquireMachinesLock for "embed-certs-520529"
	I1124 04:17:17.051888  487285 start.go:96] Skipping create...Using existing machine configuration
	I1124 04:17:17.051893  487285 fix.go:54] fixHost starting: 
	I1124 04:17:17.052159  487285 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:17:17.068772  487285 fix.go:112] recreateIfNeeded on embed-certs-520529: state=Stopped err=<nil>
	W1124 04:17:17.068805  487285 fix.go:138] unexpected machine state, will restart: <nil>
	W1124 04:17:15.387433  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	W1124 04:17:17.389029  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	W1124 04:17:19.887228  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
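
	(The three "pod_ready" warnings above come from a parallel profile's process, pid 484296, polling the coredns pod until it reports Ready; they are interleaved with the embed-certs-520529 start log. A rough hand-run equivalent of that poll, reusing the pod name from the log — kubectl wait is plain kubectl, not what minikube itself executes:)

	  kubectl -n kube-system wait --for=condition=Ready \
	    pod/coredns-66bc5c9577-x6vx6 --timeout=120s
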
	I1124 04:17:17.072018  487285 out.go:252] * Restarting existing docker container for "embed-certs-520529" ...
	I1124 04:17:17.072101  487285 cli_runner.go:164] Run: docker start embed-certs-520529
	I1124 04:17:17.314331  487285 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:17:17.334500  487285 kic.go:430] container "embed-certs-520529" state is running.
	I1124 04:17:17.334883  487285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-520529
	I1124 04:17:17.358165  487285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/config.json ...
	I1124 04:17:17.358414  487285 machine.go:94] provisionDockerMachine start ...
	I1124 04:17:17.358572  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:17.381123  487285 main.go:143] libmachine: Using SSH client type: native
	I1124 04:17:17.381453  487285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1124 04:17:17.381462  487285 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:17:17.383710  487285 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 04:17:20.538110  487285 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-520529
	
	I1124 04:17:20.538134  487285 ubuntu.go:182] provisioning hostname "embed-certs-520529"
	I1124 04:17:20.538208  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:20.556338  487285 main.go:143] libmachine: Using SSH client type: native
	I1124 04:17:20.556653  487285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1124 04:17:20.556672  487285 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-520529 && echo "embed-certs-520529" | sudo tee /etc/hostname
	I1124 04:17:20.715660  487285 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-520529
	
	I1124 04:17:20.715814  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:20.732891  487285 main.go:143] libmachine: Using SSH client type: native
	I1124 04:17:20.733206  487285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1124 04:17:20.733224  487285 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-520529' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-520529/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-520529' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 04:17:20.882848  487285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
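
	(The SSH script above pins the new hostname to 127.0.1.1 in the guest's /etc/hosts; the empty output means every step succeeded. A quick spot-check from the host — a sketch assuming the docker driver, with the container name taken from this run:)

	  docker exec embed-certs-520529 hostname                            # expect: embed-certs-520529
	  docker exec embed-certs-520529 grep embed-certs-520529 /etc/hosts  # expect a 127.0.1.1 entry
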
	I1124 04:17:20.882925  487285 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:17:20.882971  487285 ubuntu.go:190] setting up certificates
	I1124 04:17:20.883003  487285 provision.go:84] configureAuth start
	I1124 04:17:20.883089  487285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-520529
	I1124 04:17:20.907407  487285 provision.go:143] copyHostCerts
	I1124 04:17:20.907477  487285 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:17:20.907491  487285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:17:20.907568  487285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:17:20.907726  487285 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:17:20.907732  487285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:17:20.907759  487285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:17:20.907817  487285 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:17:20.907822  487285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:17:20.907845  487285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:17:20.907898  487285 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.embed-certs-520529 san=[127.0.0.1 192.168.76.2 embed-certs-520529 localhost minikube]
	I1124 04:17:21.236461  487285 provision.go:177] copyRemoteCerts
	I1124 04:17:21.236568  487285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:17:21.236646  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:21.254329  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:21.358563  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:17:21.377182  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 04:17:21.400325  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 04:17:21.418587  487285 provision.go:87] duration metric: took 535.559321ms to configureAuth
	I1124 04:17:21.418659  487285 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:17:21.418881  487285 config.go:182] Loaded profile config "embed-certs-520529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:17:21.418986  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:21.436723  487285 main.go:143] libmachine: Using SSH client type: native
	I1124 04:17:21.437051  487285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1124 04:17:21.437071  487285 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:17:21.848529  487285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 04:17:21.848558  487285 machine.go:97] duration metric: took 4.490133168s to provisionDockerMachine
	I1124 04:17:21.848576  487285 start.go:293] postStartSetup for "embed-certs-520529" (driver="docker")
	I1124 04:17:21.848593  487285 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:17:21.848677  487285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:17:21.848761  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:21.877873  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:21.995349  487285 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:17:21.999721  487285 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:17:21.999748  487285 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:17:21.999762  487285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:17:21.999833  487285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:17:21.999920  487285 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:17:22.000057  487285 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:17:22.016049  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:17:22.037679  487285 start.go:296] duration metric: took 189.085294ms for postStartSetup
	I1124 04:17:22.037770  487285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:17:22.037818  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:22.056186  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:22.159983  487285 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:17:22.165197  487285 fix.go:56] duration metric: took 5.113296211s for fixHost
	I1124 04:17:22.165235  487285 start.go:83] releasing machines lock for "embed-certs-520529", held for 5.113357816s
	I1124 04:17:22.165317  487285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-520529
	I1124 04:17:22.182843  487285 ssh_runner.go:195] Run: cat /version.json
	I1124 04:17:22.182893  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:22.182906  487285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:17:22.182969  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:22.203017  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:22.208049  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:22.389450  487285 ssh_runner.go:195] Run: systemctl --version
	I1124 04:17:22.397826  487285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:17:22.440314  487285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:17:22.445340  487285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:17:22.445430  487285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:17:22.453778  487285 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 04:17:22.453805  487285 start.go:496] detecting cgroup driver to use...
	I1124 04:17:22.453839  487285 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:17:22.453887  487285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:17:22.471850  487285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:17:22.485073  487285 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:17:22.485137  487285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:17:22.500482  487285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:17:22.516644  487285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:17:22.661370  487285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:17:22.787772  487285 docker.go:234] disabling docker service ...
	I1124 04:17:22.787884  487285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:17:22.804734  487285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:17:22.819858  487285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:17:22.945794  487285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:17:23.071863  487285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:17:23.086611  487285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:17:23.100975  487285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:17:23.101083  487285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.110326  487285 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:17:23.110446  487285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.119651  487285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.128992  487285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.137970  487285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:17:23.146821  487285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.155694  487285 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.164493  487285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:17:23.173768  487285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:17:23.181483  487285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:17:23.188957  487285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:17:23.298223  487285 ssh_runner.go:195] Run: sudo systemctl restart crio
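
	(The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place before this restart. Per the values logged, the keys they leave behind should look like the sketch below; the grep is one way to confirm inside the guest:)

	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	  # default_sysctls = [
	  #   "net.ipv4.ip_unprivileged_port_start=0",
	  # ]
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
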
	I1124 04:17:23.474289  487285 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:17:23.474357  487285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:17:23.478134  487285 start.go:564] Will wait 60s for crictl version
	I1124 04:17:23.478196  487285 ssh_runner.go:195] Run: which crictl
	I1124 04:17:23.481684  487285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:17:23.510655  487285 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:17:23.510757  487285 ssh_runner.go:195] Run: crio --version
	I1124 04:17:23.545279  487285 ssh_runner.go:195] Run: crio --version
	I1124 04:17:23.578208  487285 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:17:23.581037  487285 cli_runner.go:164] Run: docker network inspect embed-certs-520529 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:17:23.596941  487285 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 04:17:23.600821  487285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
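
	(The one-liner above replaces the host.minikube.internal mapping without sed: filter the old entry out, append the new one, then copy the result back. Unpacked — a sketch with a hypothetical temp-file name in place of /tmp/h.$$:)

	  { grep -v $'\thost.minikube.internal$' /etc/hosts   # keep everything except the stale entry
	    echo $'192.168.76.1\thost.minikube.internal'      # append the fresh mapping
	  } > /tmp/hosts.new
	  sudo cp /tmp/hosts.new /etc/hosts                   # cp preserves the original file's inode
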
	I1124 04:17:23.610297  487285 kubeadm.go:884] updating cluster {Name:embed-certs-520529 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:17:23.610432  487285 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:17:23.610541  487285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:17:23.646693  487285 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:17:23.646721  487285 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:17:23.646782  487285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:17:23.677325  487285 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:17:23.677350  487285 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:17:23.677359  487285 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1124 04:17:23.677489  487285 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-520529 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 04:17:23.677582  487285 ssh_runner.go:195] Run: crio config
	I1124 04:17:23.742616  487285 cni.go:84] Creating CNI manager for ""
	I1124 04:17:23.742639  487285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:17:23.742663  487285 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:17:23.742880  487285 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-520529 NodeName:embed-certs-520529 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:17:23.743050  487285 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-520529"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 04:17:23.743130  487285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:17:23.753983  487285 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:17:23.754104  487285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:17:23.762893  487285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 04:17:23.775306  487285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:17:23.788941  487285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
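
	(The kubeadm config rendered above has just been copied to /var/tmp/minikube/kubeadm.yaml.new. It can be sanity-checked in place — a sketch assuming kubeadm v1.29 or newer, where the "config validate" subcommand exists:)

	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
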
	I1124 04:17:23.802501  487285 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:17:23.806192  487285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:17:23.817709  487285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:17:23.944632  487285 ssh_runner.go:195] Run: sudo systemctl start kubelet
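
	(At this point the kubelet.service unit and its 10-kubeadm.conf drop-in have been written and the service started. One way to see the merged unit definition from the host — a sketch, again via the docker driver:)

	  docker exec embed-certs-520529 systemctl cat kubelet   # prints the unit plus the drop-in written above
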
	I1124 04:17:23.960961  487285 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529 for IP: 192.168.76.2
	I1124 04:17:23.961029  487285 certs.go:195] generating shared ca certs ...
	I1124 04:17:23.961073  487285 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:17:23.961269  487285 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:17:23.961358  487285 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:17:23.961382  487285 certs.go:257] generating profile certs ...
	I1124 04:17:23.961519  487285 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/client.key
	I1124 04:17:23.961640  487285 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.key.be55c4bc
	I1124 04:17:23.961729  487285 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.key
	I1124 04:17:23.961882  487285 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:17:23.961953  487285 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:17:23.961981  487285 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:17:23.962051  487285 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:17:23.962107  487285 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:17:23.962171  487285 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:17:23.962259  487285 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:17:23.963107  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:17:23.988815  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:17:24.010159  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:17:24.030894  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:17:24.050281  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 04:17:24.070928  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 04:17:24.095322  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:17:24.119612  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/embed-certs-520529/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 04:17:24.138585  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:17:24.163315  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:17:24.181981  487285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:17:24.205736  487285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:17:24.223519  487285 ssh_runner.go:195] Run: openssl version
	I1124 04:17:24.232069  487285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:17:24.243363  487285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:17:24.248165  487285 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:17:24.248305  487285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:17:24.291519  487285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 04:17:24.300440  487285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:17:24.309245  487285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:17:24.313299  487285 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:17:24.313400  487285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:17:24.358281  487285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:17:24.366397  487285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:17:24.374656  487285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:17:24.378273  487285 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:17:24.378352  487285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:17:24.420539  487285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
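
	(The .0 file names above are OpenSSL subject-hash links, the scheme c_rehash uses: the link name is the hash of the certificate's subject plus a numeric suffix. Reproducing one value by hand — it matches the b5213941.0 link created above:)

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints: b5213941
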
	I1124 04:17:24.428597  487285 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:17:24.432523  487285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 04:17:24.473749  487285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 04:17:24.516402  487285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 04:17:24.557902  487285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 04:17:24.601469  487285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 04:17:24.643617  487285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
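
	(Each openssl run above uses -checkend 86400: exit status 0 if the certificate will still be valid 24 hours from now, non-zero otherwise — presumably how minikube decides here whether control-plane certs need regenerating on restart. By hand, against one of the same files:)

	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expiring soon; would regenerate"
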
	I1124 04:17:24.689140  487285 kubeadm.go:401] StartCluster: {Name:embed-certs-520529 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-520529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:17:24.689296  487285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:17:24.689399  487285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:17:24.742905  487285 cri.go:89] found id: ""
	I1124 04:17:24.743040  487285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:17:24.755200  487285 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 04:17:24.755258  487285 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 04:17:24.755375  487285 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 04:17:24.767918  487285 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 04:17:24.768607  487285 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-520529" does not appear in /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:17:24.768938  487285 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-289526/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-520529" cluster setting kubeconfig missing "embed-certs-520529" context setting]
	I1124 04:17:24.769442  487285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:17:24.771280  487285 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 04:17:24.783337  487285 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 04:17:24.783449  487285 kubeadm.go:602] duration metric: took 28.135409ms to restartPrimaryControlPlane
	I1124 04:17:24.783478  487285 kubeadm.go:403] duration metric: took 94.348883ms to StartCluster
	I1124 04:17:24.783508  487285 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:17:24.783617  487285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:17:24.785019  487285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:17:24.785449  487285 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:17:24.785828  487285 config.go:182] Loaded profile config "embed-certs-520529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:17:24.785911  487285 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:17:24.786068  487285 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-520529"
	I1124 04:17:24.786114  487285 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-520529"
	W1124 04:17:24.786135  487285 addons.go:248] addon storage-provisioner should already be in state true
	I1124 04:17:24.786189  487285 host.go:66] Checking if "embed-certs-520529" exists ...
	I1124 04:17:24.786804  487285 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:17:24.787006  487285 addons.go:70] Setting dashboard=true in profile "embed-certs-520529"
	I1124 04:17:24.787043  487285 addons.go:239] Setting addon dashboard=true in "embed-certs-520529"
	W1124 04:17:24.787083  487285 addons.go:248] addon dashboard should already be in state true
	I1124 04:17:24.787124  487285 host.go:66] Checking if "embed-certs-520529" exists ...
	I1124 04:17:24.787338  487285 addons.go:70] Setting default-storageclass=true in profile "embed-certs-520529"
	I1124 04:17:24.787374  487285 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-520529"
	I1124 04:17:24.787668  487285 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:17:24.787670  487285 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:17:24.795655  487285 out.go:179] * Verifying Kubernetes components...
	I1124 04:17:24.803130  487285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:17:24.821508  487285 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:17:24.829902  487285 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:17:24.829937  487285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:17:24.830003  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:24.858430  487285 addons.go:239] Setting addon default-storageclass=true in "embed-certs-520529"
	W1124 04:17:24.858539  487285 addons.go:248] addon default-storageclass should already be in state true
	I1124 04:17:24.858568  487285 host.go:66] Checking if "embed-certs-520529" exists ...
	I1124 04:17:24.870364  487285 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:17:24.876994  487285 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 04:17:24.883433  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:24.891091  487285 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1124 04:17:21.887388  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	W1124 04:17:23.887589  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	I1124 04:17:24.894877  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 04:17:24.894900  487285 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 04:17:24.894964  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:24.913230  487285 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:17:24.913251  487285 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:17:24.913318  487285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:17:24.950690  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:24.963617  487285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:17:25.157380  487285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:17:25.169298  487285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:17:25.197726  487285 node_ready.go:35] waiting up to 6m0s for node "embed-certs-520529" to be "Ready" ...
	I1124 04:17:25.321041  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 04:17:25.321067  487285 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 04:17:25.330312  487285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:17:25.401347  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 04:17:25.401373  487285 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 04:17:25.476935  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 04:17:25.476975  487285 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 04:17:25.548322  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 04:17:25.548346  487285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 04:17:25.564682  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 04:17:25.564707  487285 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 04:17:25.580240  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 04:17:25.580265  487285 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 04:17:25.597243  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 04:17:25.597270  487285 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 04:17:25.611947  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 04:17:25.611972  487285 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 04:17:25.627263  487285 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 04:17:25.627294  487285 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 04:17:25.643335  487285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
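
Each dashboard manifest is staged under /etc/kubernetes/addons by the scp steps above and then applied in one kubectl invocation. A hedged sketch of assembling that command line; the kubectl binary path and KUBECONFIG value are the ones logged above, the helper itself is illustrative:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// The manifests staged by the scp steps logged above, applied in order.
	manifests := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
		"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
		"dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml",
		"dashboard-secret.yaml", "dashboard-svc.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", "/etc/kubernetes/addons/"+m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
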
	W1124 04:17:26.386329  484296 pod_ready.go:104] pod "coredns-66bc5c9577-x6vx6" is not "Ready", error: <nil>
	I1124 04:17:26.887064  484296 pod_ready.go:94] pod "coredns-66bc5c9577-x6vx6" is "Ready"
	I1124 04:17:26.887088  484296 pod_ready.go:86] duration metric: took 36.005702344s for pod "coredns-66bc5c9577-x6vx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:26.890128  484296 pod_ready.go:83] waiting for pod "etcd-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:26.894905  484296 pod_ready.go:94] pod "etcd-no-preload-600301" is "Ready"
	I1124 04:17:26.894983  484296 pod_ready.go:86] duration metric: took 4.830181ms for pod "etcd-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:26.897468  484296 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:26.904062  484296 pod_ready.go:94] pod "kube-apiserver-no-preload-600301" is "Ready"
	I1124 04:17:26.904136  484296 pod_ready.go:86] duration metric: took 6.59831ms for pod "kube-apiserver-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:26.906358  484296 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:27.084739  484296 pod_ready.go:94] pod "kube-controller-manager-no-preload-600301" is "Ready"
	I1124 04:17:27.084764  484296 pod_ready.go:86] duration metric: took 178.336372ms for pod "kube-controller-manager-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:27.284601  484296 pod_ready.go:83] waiting for pod "kube-proxy-bzg2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:27.684529  484296 pod_ready.go:94] pod "kube-proxy-bzg2j" is "Ready"
	I1124 04:17:27.684554  484296 pod_ready.go:86] duration metric: took 399.929244ms for pod "kube-proxy-bzg2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:27.884663  484296 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:28.284893  484296 pod_ready.go:94] pod "kube-scheduler-no-preload-600301" is "Ready"
	I1124 04:17:28.284970  484296 pod_ready.go:86] duration metric: took 400.230695ms for pod "kube-scheduler-no-preload-600301" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:17:28.285001  484296 pod_ready.go:40] duration metric: took 37.407980948s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:17:28.369480  484296 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 04:17:28.373709  484296 out.go:179] * Done! kubectl is now configured to use "no-preload-600301" cluster and "default" namespace by default
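
The pod_ready waits above poll one pod at a time until its Ready condition is True or the pod is gone; coredns took 36s here because the whole control plane had just been restarted. A minimal client-go sketch of the same check (the pod name is taken from this run; this is not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-66bc5c9577-x6vx6", metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err): // the "or be gone" half of the wait
			fmt.Println("pod is gone")
			return
		case err == nil && podReady(p):
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
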
	I1124 04:17:29.589988  487285 node_ready.go:49] node "embed-certs-520529" is "Ready"
	I1124 04:17:29.590078  487285 node_ready.go:38] duration metric: took 4.392298505s for node "embed-certs-520529" to be "Ready" ...
	I1124 04:17:29.590109  487285 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:17:29.590197  487285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:17:31.046367  487285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.87703251s)
	I1124 04:17:31.046433  487285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.716096896s)
	I1124 04:17:31.099223  487285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.45580834s)
	I1124 04:17:31.099523  487285 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.509286148s)
	I1124 04:17:31.099588  487285 api_server.go:72] duration metric: took 6.314061602s to wait for apiserver process to appear ...
	I1124 04:17:31.099609  487285 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:17:31.099656  487285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:17:31.102507  487285 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-520529 addons enable metrics-server
	
	I1124 04:17:31.105947  487285 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 04:17:31.108881  487285 addons.go:530] duration metric: took 6.3229639s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 04:17:31.119640  487285 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 04:17:31.119722  487285 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 04:17:31.600205  487285 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:17:31.609542  487285 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 04:17:31.610846  487285 api_server.go:141] control plane version: v1.34.1
	I1124 04:17:31.610904  487285 api_server.go:131] duration metric: took 511.262676ms to wait for apiserver health ...
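
The only failing check in the 500 response above is [-]poststarthook/rbac/bootstrap-roles, which clears once the apiserver finishes seeding its default RBAC objects, so the poll simply retries; here the next attempt roughly 500ms later got 200. A self-contained sketch of such a poll (TLS verification is skipped only because this sketch has no access to the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // prints "ok"
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the retry cadence seen above
	}
}
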
	I1124 04:17:31.610942  487285 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:17:31.615224  487285 system_pods.go:59] 8 kube-system pods found
	I1124 04:17:31.615308  487285 system_pods.go:61] "coredns-66bc5c9577-bvwhr" [afc820fb-a24a-4fb0-b2c9-8c5e2014a762] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:17:31.615334  487285 system_pods.go:61] "etcd-embed-certs-520529" [f26ae428-2218-4bda-9b92-578d85c74df8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:17:31.615386  487285 system_pods.go:61] "kindnet-tkncp" [eccdf0bd-3245-4547-aed3-65ae2e72ed82] Running
	I1124 04:17:31.615414  487285 system_pods.go:61] "kube-apiserver-embed-certs-520529" [d25fe462-1c8b-467f-8c81-4610bd9173c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:17:31.615441  487285 system_pods.go:61] "kube-controller-manager-embed-certs-520529" [093b15ed-2629-4f07-aacb-21da8fe15032] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:17:31.615477  487285 system_pods.go:61] "kube-proxy-dt4th" [47798ce5-c1f5-4f74-a933-76514aee25a3] Running
	I1124 04:17:31.615501  487285 system_pods.go:61] "kube-scheduler-embed-certs-520529" [2a37b8ab-a5c8-45f3-9bc9-3e233a33c05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:17:31.615538  487285 system_pods.go:61] "storage-provisioner" [bad7a9be-48f5-443b-824e-859f9e21d194] Running
	I1124 04:17:31.615562  487285 system_pods.go:74] duration metric: took 4.597957ms to wait for pod list to return data ...
	I1124 04:17:31.615583  487285 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:17:31.618315  487285 default_sa.go:45] found service account: "default"
	I1124 04:17:31.618373  487285 default_sa.go:55] duration metric: took 2.756046ms for default service account to be created ...
	I1124 04:17:31.618411  487285 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 04:17:31.623799  487285 system_pods.go:86] 8 kube-system pods found
	I1124 04:17:31.623880  487285 system_pods.go:89] "coredns-66bc5c9577-bvwhr" [afc820fb-a24a-4fb0-b2c9-8c5e2014a762] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:17:31.623906  487285 system_pods.go:89] "etcd-embed-certs-520529" [f26ae428-2218-4bda-9b92-578d85c74df8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:17:31.623946  487285 system_pods.go:89] "kindnet-tkncp" [eccdf0bd-3245-4547-aed3-65ae2e72ed82] Running
	I1124 04:17:31.623977  487285 system_pods.go:89] "kube-apiserver-embed-certs-520529" [d25fe462-1c8b-467f-8c81-4610bd9173c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:17:31.624000  487285 system_pods.go:89] "kube-controller-manager-embed-certs-520529" [093b15ed-2629-4f07-aacb-21da8fe15032] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:17:31.624036  487285 system_pods.go:89] "kube-proxy-dt4th" [47798ce5-c1f5-4f74-a933-76514aee25a3] Running
	I1124 04:17:31.624065  487285 system_pods.go:89] "kube-scheduler-embed-certs-520529" [2a37b8ab-a5c8-45f3-9bc9-3e233a33c05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:17:31.624088  487285 system_pods.go:89] "storage-provisioner" [bad7a9be-48f5-443b-824e-859f9e21d194] Running
	I1124 04:17:31.624125  487285 system_pods.go:126] duration metric: took 5.689572ms to wait for k8s-apps to be running ...
	I1124 04:17:31.624152  487285 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 04:17:31.624237  487285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:17:31.638826  487285 system_svc.go:56] duration metric: took 14.666965ms WaitForService to wait for kubelet
	I1124 04:17:31.638859  487285 kubeadm.go:587] duration metric: took 6.853353986s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:17:31.638879  487285 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:17:31.641421  487285 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:17:31.641456  487285 node_conditions.go:123] node cpu capacity is 2
	I1124 04:17:31.641469  487285 node_conditions.go:105] duration metric: took 2.583803ms to run NodePressure ...
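
The NodePressure step above amounts to reading the node object: its capacity (2 CPUs and 203034800Ki of ephemeral storage on this runner) and its pressure conditions. An equivalent read with client-go, with the node name taken from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-520529", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), eph.String())
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			// All three must be False for the node to pass the pressure check.
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
	}
}
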
	I1124 04:17:31.641482  487285 start.go:242] waiting for startup goroutines ...
	I1124 04:17:31.641490  487285 start.go:247] waiting for cluster config update ...
	I1124 04:17:31.641501  487285 start.go:256] writing updated cluster config ...
	I1124 04:17:31.641773  487285 ssh_runner.go:195] Run: rm -f paused
	I1124 04:17:31.646082  487285 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:17:31.650050  487285 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bvwhr" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 04:17:33.656353  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:17:35.676719  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:17:38.156551  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:17:40.158712  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.537044623Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.546073192Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.546107826Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.546127945Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.553459007Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.553496924Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.553518134Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.562641785Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.562800704Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.562873492Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.568090354Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:17:30 no-preload-600301 crio[663]: time="2025-11-24T04:17:30.568127351Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.578965419Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9095e1e9-3765-4d00-9100-957e56ab2e15 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.580603979Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=be2f6067-b3e7-4734-92c3-609f234aa7aa name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.581610129Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7/dashboard-metrics-scraper" id=047166df-26df-43f9-ac73-17a482bb0780 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.581705236Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.589024605Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.590029959Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.61903504Z" level=info msg="Created container 7d2caa78e54e80be1a8a82757ad06b9829a7067b9809a00be95c8bb15d19514b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7/dashboard-metrics-scraper" id=047166df-26df-43f9-ac73-17a482bb0780 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.61999945Z" level=info msg="Starting container: 7d2caa78e54e80be1a8a82757ad06b9829a7067b9809a00be95c8bb15d19514b" id=5879d2a9-bd73-47f3-a51f-bd692a2beda7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.627792121Z" level=info msg="Started container" PID=1749 containerID=7d2caa78e54e80be1a8a82757ad06b9829a7067b9809a00be95c8bb15d19514b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7/dashboard-metrics-scraper id=5879d2a9-bd73-47f3-a51f-bd692a2beda7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bca19331db37758574140ef0dec468f2e692158319896de6cc9794f9c57eee5c
	Nov 24 04:17:37 no-preload-600301 conmon[1747]: conmon 7d2caa78e54e80be1a8a <ninfo>: container 1749 exited with status 1
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.86578921Z" level=info msg="Removing container: 99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6" id=26e8774a-666e-48c7-94d3-e7425f63f00d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.878951615Z" level=info msg="Error loading conmon cgroup of container 99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6: cgroup deleted" id=26e8774a-666e-48c7-94d3-e7425f63f00d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:17:37 no-preload-600301 crio[663]: time="2025-11-24T04:17:37.882416807Z" level=info msg="Removed container 99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7/dashboard-metrics-scraper" id=26e8774a-666e-48c7-94d3-e7425f63f00d name=/runtime.v1.RuntimeService/RemoveContainer
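
The CRI-O lines above show a crash loop: the dashboard-metrics-scraper container is created, exits with status 1 milliseconds later, and the previous attempt's container is removed; the container-status table below lists it as Exited with ATTEMPT 3. A client-go sketch for surfacing such pods by restart count and last exit code (illustrative only):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, s := range p.Status.ContainerStatuses {
			// For the scraper above this would report exit code 1.
			if t := s.LastTerminationState.Terminated; t != nil && s.RestartCount > 0 {
				fmt.Printf("%s/%s restarted %d times, last exit code %d\n",
					p.Name, s.Name, s.RestartCount, t.ExitCode)
			}
		}
	}
}
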
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7d2caa78e54e8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   bca19331db377       dashboard-metrics-scraper-6ffb444bf9-wjlb7   kubernetes-dashboard
	77afd0d3ea194       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           25 seconds ago       Running             storage-provisioner         2                   86e889e3e43ec       storage-provisioner                          kube-system
	4a692b478ef7a       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   d367f2a5d62cb       kubernetes-dashboard-855c9754f9-q7r6n        kubernetes-dashboard
	bad663499e3d7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   5d827da865141       coredns-66bc5c9577-x6vx6                     kube-system
	32d3fa09f2669       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   b23059496ca5c       busybox                                      default
	03cb1763245b0       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           56 seconds ago       Exited              storage-provisioner         1                   86e889e3e43ec       storage-provisioner                          kube-system
	ccc8adf4a0cd3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   90c24029711fe       kindnet-rqpt9                                kube-system
	bf628534f2a9c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           56 seconds ago       Running             kube-proxy                  1                   d38a42ae5d9b4       kube-proxy-bzg2j                             kube-system
	8f632cb5a12f7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   50d1155acd7ae       kube-controller-manager-no-preload-600301    kube-system
	ed75a78a04580       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   3e506b8589361       kube-scheduler-no-preload-600301             kube-system
	4ac88f9f47aab       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   90eb1665032bd       etcd-no-preload-600301                       kube-system
	55afed455b10e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   9d766c8f32e63       kube-apiserver-no-preload-600301             kube-system
	
	
	==> coredns [bad663499e3d7ea06fa9dd003e9f02d75a1bc8b3d6e129e0346e94af65d0f20f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:49033 - 35610 "HINFO IN 6166004667448281061.8125994888046049801. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016784171s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
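
The [ERROR] lines above are CoreDNS timing out against the kubernetes Service VIP (10.96.0.1:443) while the apiserver was still coming back; once it answered, the kubernetes plugin synced and the errors stopped. The same reachability can be probed with a bare TCP dial, as in this sketch (only meaningful from inside the cluster network):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the in-cluster kubernetes Service VIP seen in the
	// CoreDNS errors above.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver VIP reachable")
}
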
	
	
	==> describe nodes <==
	Name:               no-preload-600301
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-600301
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=no-preload-600301
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_15_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:15:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-600301
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:17:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:17:19 +0000   Mon, 24 Nov 2025 04:15:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:17:19 +0000   Mon, 24 Nov 2025 04:15:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:17:19 +0000   Mon, 24 Nov 2025 04:15:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:17:19 +0000   Mon, 24 Nov 2025 04:16:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-600301
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                d1ffab9e-c111-4d9d-8ac8-cb5bfd0ed15c
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-x6vx6                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     116s
	  kube-system                 etcd-no-preload-600301                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-rqpt9                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-no-preload-600301              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-no-preload-600301     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-bzg2j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-no-preload-600301              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wjlb7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-q7r6n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 114s                   kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Warning  CgroupV1                 2m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node no-preload-600301 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node no-preload-600301 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node no-preload-600301 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m1s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m                     kubelet          Node no-preload-600301 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m                     kubelet          Node no-preload-600301 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m                     kubelet          Node no-preload-600301 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m                     kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           117s                   node-controller  Node no-preload-600301 event: Registered Node no-preload-600301 in Controller
	  Normal   NodeReady                99s                    kubelet          Node no-preload-600301 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node no-preload-600301 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node no-preload-600301 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node no-preload-600301 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node no-preload-600301 event: Registered Node no-preload-600301 in Controller
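
As a cross-check on the Allocated resources table above: the 850m of CPU requests is the column sum 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler), and 850m of the node's 2000m capacity is 42% after truncation. The 220Mi of memory requests is likewise 70Mi + 100Mi + 50Mi.
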
	
	
	==> dmesg <==
	[Nov24 03:54] overlayfs: idmapped layers are currently not supported
	[Nov24 03:55] overlayfs: idmapped layers are currently not supported
	[Nov24 03:56] overlayfs: idmapped layers are currently not supported
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	[Nov24 04:15] overlayfs: idmapped layers are currently not supported
	[ +47.476343] overlayfs: idmapped layers are currently not supported
	[Nov24 04:16] overlayfs: idmapped layers are currently not supported
	[Nov24 04:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4ac88f9f47aab0b24c518b68c22f81e6afea8260839ddedaef751e38026bf9d2] <==
	{"level":"warn","ts":"2025-11-24T04:16:47.299543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.323552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.397601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.429653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.467644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.503616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.542705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.572123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.610132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.646059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.695365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.716016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.758822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.759801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.772198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.790576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.813090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.824152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.841306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.865449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.886404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.912420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.933206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:47.958817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:16:48.094580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40470","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 04:17:46 up  2:59,  0 user,  load average: 4.56, 3.57, 2.91
	Linux no-preload-600301 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ccc8adf4a0cd356a92cfcf643a0bb1acfc023f9b49443edf4d15961bd8be64fa] <==
	I1124 04:16:50.266302       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:16:50.315110       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 04:16:50.315381       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:16:50.315425       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:16:50.315452       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:16:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:16:50.530739       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:16:50.530756       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:16:50.530765       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:16:50.531051       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 04:17:20.530718       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 04:17:20.530802       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 04:17:20.532020       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 04:17:20.532091       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 04:17:21.731528       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:17:21.731557       1 metrics.go:72] Registering metrics
	I1124 04:17:21.731611       1 controller.go:711] "Syncing nftables rules"
	I1124 04:17:30.531205       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:17:30.531290       1 main.go:301] handling current node
	I1124 04:17:40.530873       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:17:40.530905       1 main.go:301] handling current node
	
	
	==> kube-apiserver [55afed455b10e0b92f497f4cc207d5f38895ca7082a005ab16f9c05679590e1b] <==
	I1124 04:16:49.148760       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 04:16:49.148898       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 04:16:49.148925       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 04:16:49.148955       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 04:16:49.148968       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 04:16:49.150339       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 04:16:49.150391       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 04:16:49.152772       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 04:16:49.152928       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:16:49.153959       1 aggregator.go:171] initial CRD sync complete...
	I1124 04:16:49.153989       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 04:16:49.153996       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 04:16:49.154002       1 cache.go:39] Caches are synced for autoregister controller
	I1124 04:16:49.159301       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:16:49.516192       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:16:49.760136       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 04:16:49.765058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:16:49.842966       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 04:16:49.902912       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:16:49.922402       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:16:50.143173       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.119.35"}
	I1124 04:16:50.191118       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.211.209"}
	I1124 04:16:52.774621       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 04:16:52.876086       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 04:16:52.989991       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8f632cb5a12f7dae88e3c60421ff0ab241f680a7b63c4f65bb8eb84499a64e5b] <==
	I1124 04:16:52.383354       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:16:52.383373       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:16:52.383381       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:16:52.388755       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 04:16:52.388765       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 04:16:52.392391       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 04:16:52.392856       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 04:16:52.397198       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 04:16:52.398366       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 04:16:52.400637       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 04:16:52.406839       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 04:16:52.409086       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 04:16:52.416500       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 04:16:52.416909       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 04:16:52.417089       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 04:16:52.418038       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 04:16:52.418087       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 04:16:52.418071       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 04:16:52.418056       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 04:16:52.418791       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 04:16:52.419979       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 04:16:52.422690       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:16:52.424848       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 04:16:53.012724       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1124 04:16:53.013130       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [bf628534f2a9c982b95075e67d7f92874661a0aeeb8a0f8d1a25a2b637198bcb] <==
	I1124 04:16:50.503717       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:16:50.628728       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:16:50.835562       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:16:50.835716       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 04:16:50.835835       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:16:50.866723       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:16:50.867737       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:16:50.874799       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:16:50.875165       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:16:50.875410       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:16:50.879237       1 config.go:200] "Starting service config controller"
	I1124 04:16:50.879332       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:16:50.879381       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:16:50.879424       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:16:50.879461       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:16:50.879495       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:16:50.880210       1 config.go:309] "Starting node config controller"
	I1124 04:16:50.880271       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:16:50.880304       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:16:50.982002       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:16:50.982163       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 04:16:50.982519       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ed75a78a04580a5d6c612702f42fb21257b2917a5a53cb5bcaa4a18f5382a8d9] <==
	I1124 04:16:46.625421       1 serving.go:386] Generated self-signed cert in-memory
	W1124 04:16:48.968484       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 04:16:48.968529       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 04:16:48.968549       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 04:16:48.968556       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 04:16:49.094014       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 04:16:49.094047       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:16:49.118584       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 04:16:49.118691       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:16:49.118709       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:16:49.118724       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 04:16:49.219613       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:16:53 no-preload-600301 kubelet[785]: W1124 04:16:53.365035     785 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/49ddc9e82ab9d43934cea4430c4bc27f0a6202c57efb037930223131a0eb594c/crio-d367f2a5d62cb9efd4d5d7fe3da4100db57fc4d8141999b3b462e0c052cb1117 WatchSource:0}: Error finding container d367f2a5d62cb9efd4d5d7fe3da4100db57fc4d8141999b3b462e0c052cb1117: Status 404 returned error can't find the container with id d367f2a5d62cb9efd4d5d7fe3da4100db57fc4d8141999b3b462e0c052cb1117
	Nov 24 04:16:56 no-preload-600301 kubelet[785]: I1124 04:16:56.431595     785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 04:16:57 no-preload-600301 kubelet[785]: I1124 04:16:57.715758     785 scope.go:117] "RemoveContainer" containerID="4298790382f8a407ab5b8225d6273618fdfb54479db5d57698d76c3aa1e0705e"
	Nov 24 04:16:58 no-preload-600301 kubelet[785]: I1124 04:16:58.721136     785 scope.go:117] "RemoveContainer" containerID="4298790382f8a407ab5b8225d6273618fdfb54479db5d57698d76c3aa1e0705e"
	Nov 24 04:16:58 no-preload-600301 kubelet[785]: I1124 04:16:58.721408     785 scope.go:117] "RemoveContainer" containerID="a5a908d84a7f81ba233fb77a69e5cb000916cca6cd564a4bbf4df2488e33a5a6"
	Nov 24 04:16:58 no-preload-600301 kubelet[785]: E1124 04:16:58.721547     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wjlb7_kubernetes-dashboard(70e8e9d2-86da-45e4-9b28-5081999bc4df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7" podUID="70e8e9d2-86da-45e4-9b28-5081999bc4df"
	Nov 24 04:16:59 no-preload-600301 kubelet[785]: I1124 04:16:59.725895     785 scope.go:117] "RemoveContainer" containerID="a5a908d84a7f81ba233fb77a69e5cb000916cca6cd564a4bbf4df2488e33a5a6"
	Nov 24 04:16:59 no-preload-600301 kubelet[785]: E1124 04:16:59.726098     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wjlb7_kubernetes-dashboard(70e8e9d2-86da-45e4-9b28-5081999bc4df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7" podUID="70e8e9d2-86da-45e4-9b28-5081999bc4df"
	Nov 24 04:17:03 no-preload-600301 kubelet[785]: I1124 04:17:03.291752     785 scope.go:117] "RemoveContainer" containerID="a5a908d84a7f81ba233fb77a69e5cb000916cca6cd564a4bbf4df2488e33a5a6"
	Nov 24 04:17:03 no-preload-600301 kubelet[785]: E1124 04:17:03.291942     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wjlb7_kubernetes-dashboard(70e8e9d2-86da-45e4-9b28-5081999bc4df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7" podUID="70e8e9d2-86da-45e4-9b28-5081999bc4df"
	Nov 24 04:17:16 no-preload-600301 kubelet[785]: I1124 04:17:16.578242     785 scope.go:117] "RemoveContainer" containerID="a5a908d84a7f81ba233fb77a69e5cb000916cca6cd564a4bbf4df2488e33a5a6"
	Nov 24 04:17:16 no-preload-600301 kubelet[785]: I1124 04:17:16.792000     785 scope.go:117] "RemoveContainer" containerID="a5a908d84a7f81ba233fb77a69e5cb000916cca6cd564a4bbf4df2488e33a5a6"
	Nov 24 04:17:16 no-preload-600301 kubelet[785]: I1124 04:17:16.792792     785 scope.go:117] "RemoveContainer" containerID="99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6"
	Nov 24 04:17:16 no-preload-600301 kubelet[785]: E1124 04:17:16.793098     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wjlb7_kubernetes-dashboard(70e8e9d2-86da-45e4-9b28-5081999bc4df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7" podUID="70e8e9d2-86da-45e4-9b28-5081999bc4df"
	Nov 24 04:17:16 no-preload-600301 kubelet[785]: I1124 04:17:16.819684     785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-q7r6n" podStartSLOduration=12.785788899 podStartE2EDuration="24.819665342s" podCreationTimestamp="2025-11-24 04:16:52 +0000 UTC" firstStartedPulling="2025-11-24 04:16:53.374415703 +0000 UTC m=+11.012717534" lastFinishedPulling="2025-11-24 04:17:05.408292154 +0000 UTC m=+23.046593977" observedRunningTime="2025-11-24 04:17:05.798875905 +0000 UTC m=+23.437177826" watchObservedRunningTime="2025-11-24 04:17:16.819665342 +0000 UTC m=+34.457967165"
	Nov 24 04:17:20 no-preload-600301 kubelet[785]: I1124 04:17:20.806124     785 scope.go:117] "RemoveContainer" containerID="03cb1763245b01261c41affe67d5e6fa8faaad06c4e673e16d63aff37e96298d"
	Nov 24 04:17:23 no-preload-600301 kubelet[785]: I1124 04:17:23.291665     785 scope.go:117] "RemoveContainer" containerID="99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6"
	Nov 24 04:17:23 no-preload-600301 kubelet[785]: E1124 04:17:23.291848     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wjlb7_kubernetes-dashboard(70e8e9d2-86da-45e4-9b28-5081999bc4df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7" podUID="70e8e9d2-86da-45e4-9b28-5081999bc4df"
	Nov 24 04:17:37 no-preload-600301 kubelet[785]: I1124 04:17:37.577855     785 scope.go:117] "RemoveContainer" containerID="99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6"
	Nov 24 04:17:37 no-preload-600301 kubelet[785]: I1124 04:17:37.853821     785 scope.go:117] "RemoveContainer" containerID="99f8016884ae746c911ab3bcb990ff667a9cdda2f3846eb0fadc7ab56286baf6"
	Nov 24 04:17:37 no-preload-600301 kubelet[785]: I1124 04:17:37.854335     785 scope.go:117] "RemoveContainer" containerID="7d2caa78e54e80be1a8a82757ad06b9829a7067b9809a00be95c8bb15d19514b"
	Nov 24 04:17:37 no-preload-600301 kubelet[785]: E1124 04:17:37.854684     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wjlb7_kubernetes-dashboard(70e8e9d2-86da-45e4-9b28-5081999bc4df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wjlb7" podUID="70e8e9d2-86da-45e4-9b28-5081999bc4df"
	Nov 24 04:17:41 no-preload-600301 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 04:17:41 no-preload-600301 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 04:17:41 no-preload-600301 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4a692b478ef7ae43e58f4c41e564623fcf774e3807a46373ba7ce091dea7cfdc] <==
	2025/11/24 04:17:05 Starting overwatch
	2025/11/24 04:17:05 Using namespace: kubernetes-dashboard
	2025/11/24 04:17:05 Using in-cluster config to connect to apiserver
	2025/11/24 04:17:05 Using secret token for csrf signing
	2025/11/24 04:17:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 04:17:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 04:17:05 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 04:17:05 Generating JWE encryption key
	2025/11/24 04:17:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 04:17:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 04:17:05 Initializing JWE encryption key from synchronized object
	2025/11/24 04:17:05 Creating in-cluster Sidecar client
	2025/11/24 04:17:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 04:17:05 Serving insecurely on HTTP port: 9090
	2025/11/24 04:17:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [03cb1763245b01261c41affe67d5e6fa8faaad06c4e673e16d63aff37e96298d] <==
	I1124 04:16:50.474136       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 04:17:20.476869       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [77afd0d3ea194d2cc191291986d2a24265aa9a172c1bfb35d2a19cd74ae1b0b1] <==
	I1124 04:17:20.867184       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 04:17:20.890563       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 04:17:20.890716       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 04:17:20.893973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:24.349227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:28.611915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:32.210198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:35.264232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:38.285924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:38.292507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:17:38.292643       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 04:17:38.292793       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-600301_45fa2777-a0cf-4b1a-9b53-ff6b2b9bbd73!
	I1124 04:17:38.293633       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"158a062c-c5ad-4735-ae08-e89f4d9cb5f4", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-600301_45fa2777-a0cf-4b1a-9b53-ff6b2b9bbd73 became leader
	W1124 04:17:38.302379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:38.309528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:17:38.393863       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-600301_45fa2777-a0cf-4b1a-9b53-ff6b2b9bbd73!
	W1124 04:17:40.314723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:40.329211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:42.337674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:42.344696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:44.348063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:44.353712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:46.357858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:17:46.369455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
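
The kubelet entries in the log above show dashboard-metrics-scraper cycling through CrashLoopBackOff, with the restart delay doubling from 10s to 20s to 40s. A minimal Go sketch of that doubling-with-cap policy, assuming kubelet's documented 5-minute ceiling (nextBackoff is a hypothetical helper, not kubelet's actual code):

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff doubles the previous CrashLoopBackOff delay and clamps it
    // at limit, mirroring the 10s -> 20s -> 40s progression in the kubelet
    // log above.
    func nextBackoff(prev, limit time.Duration) time.Duration {
        if prev <= 0 {
            return 10 * time.Second // initial delay seen in the log
        }
        if next := prev * 2; next < limit {
            return next
        }
        return limit
    }

    func main() {
        var d time.Duration
        for i := 0; i < 7; i++ {
            d = nextBackoff(d, 5*time.Minute)
            fmt.Println(d) // 10s, 20s, 40s, 1m20s, 2m40s, then capped at 5m0s
        }
    }
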
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-600301 -n no-preload-600301
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-600301 -n no-preload-600301: exit status 2 (372.346067ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-600301 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.04s)
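
The storage-provisioner capture above shows the restarted provisioner acquiring the kube-system/k8s.io-minikube-hostpath lease before starting its controller, while the repeated warnings note that the v1 Endpoints lock it uses is deprecated. A minimal sketch of the same pattern with client-go's leader-election helpers, switched to the Lease lock those warnings point toward (the identity and timings are illustrative):

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Lease-based lock; the provisioner in the log still uses an
        // Endpoints lock, hence the deprecation warnings above.
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: "example-holder"},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease; starting provisioner") },
                OnStoppedLeading: func() { log.Println("lost lease; stopping") },
            },
        })
    }
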

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-520529 --alsologtostderr -v=1
E1124 04:18:24.396970  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-520529 --alsologtostderr -v=1: exit status 80 (2.185092463s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-520529 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 04:18:23.140789  493179 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:18:23.141026  493179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:18:23.141056  493179 out.go:374] Setting ErrFile to fd 2...
	I1124 04:18:23.141076  493179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:18:23.141407  493179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:18:23.141700  493179 out.go:368] Setting JSON to false
	I1124 04:18:23.141759  493179 mustload.go:66] Loading cluster: embed-certs-520529
	I1124 04:18:23.142248  493179 config.go:182] Loaded profile config "embed-certs-520529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:18:23.144038  493179 cli_runner.go:164] Run: docker container inspect embed-certs-520529 --format={{.State.Status}}
	I1124 04:18:23.165980  493179 host.go:66] Checking if "embed-certs-520529" exists ...
	I1124 04:18:23.166293  493179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:18:23.264446  493179 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 04:18:23.255270577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:18:23.265076  493179 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763935228-21975/minikube-v1.37.0-1763935228-21975-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763935228-21975-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-520529 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 04:18:23.268472  493179 out.go:179] * Pausing node embed-certs-520529 ... 
	I1124 04:18:23.272132  493179 host.go:66] Checking if "embed-certs-520529" exists ...
	I1124 04:18:23.272482  493179 ssh_runner.go:195] Run: systemctl --version
	I1124 04:18:23.272534  493179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-520529
	I1124 04:18:23.290803  493179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/embed-certs-520529/id_rsa Username:docker}
	I1124 04:18:23.396087  493179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:18:23.415021  493179 pause.go:52] kubelet running: true
	I1124 04:18:23.415085  493179 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:18:23.690066  493179 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:18:23.690150  493179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:18:23.785722  493179 cri.go:89] found id: "32ff3b0eef3a48557f3abf7a60a0b3a38e475c5ff365fe65364698c01cc51e5c"
	I1124 04:18:23.785797  493179 cri.go:89] found id: "0558f06299e5c2fe843cf590ec463909b96624be39d7369aaf0d96a1bfd563ac"
	I1124 04:18:23.785816  493179 cri.go:89] found id: "a71adf18e73dd3877d49c754226be539d6ccecca0c8d845a84e7cc52f36eebe7"
	I1124 04:18:23.785840  493179 cri.go:89] found id: "cb80ca0ac5438e0cbc64a217d24df56f63a755bd503a1dfd46fc74505c3a9a6a"
	I1124 04:18:23.785877  493179 cri.go:89] found id: "3201ebcdcd96c85d6ccc8935814a307b94f8cb6caa93667464e51dc85132e068"
	I1124 04:18:23.785901  493179 cri.go:89] found id: "88db140510be739f963482f2996de33b78a17e5b533d83b82a40f234765849dd"
	I1124 04:18:23.785922  493179 cri.go:89] found id: "46b464c3ef546ad426e20a096b6d507622c061f58b20e93dcb5f51f5429e5a56"
	I1124 04:18:23.785960  493179 cri.go:89] found id: "8ecff8f50d3922e20ff3b13ee70cfc72ccd41cc0050330e8fa59fb1fd12b3749"
	I1124 04:18:23.785979  493179 cri.go:89] found id: "dbe92e95274246f2a0d7b1498caff07e467f7316997bfdfb9d6b5eb74f4a8db9"
	I1124 04:18:23.786005  493179 cri.go:89] found id: "13b95fe1883b52b2af09a03014debb9c88264e08051cf4f73c66109c0d914123"
	I1124 04:18:23.786036  493179 cri.go:89] found id: "997f8bac617eec8cefe694cb39fb8f8ea3728aa8ff4e30ca40e239b9ab5d2a8a"
	I1124 04:18:23.786057  493179 cri.go:89] found id: ""
	I1124 04:18:23.786136  493179 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:18:23.799725  493179 retry.go:31] will retry after 265.954915ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:18:23Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:18:24.066296  493179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:18:24.080322  493179 pause.go:52] kubelet running: false
	I1124 04:18:24.080409  493179 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:18:24.279650  493179 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:18:24.279748  493179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:18:24.357231  493179 cri.go:89] found id: "32ff3b0eef3a48557f3abf7a60a0b3a38e475c5ff365fe65364698c01cc51e5c"
	I1124 04:18:24.357294  493179 cri.go:89] found id: "0558f06299e5c2fe843cf590ec463909b96624be39d7369aaf0d96a1bfd563ac"
	I1124 04:18:24.357321  493179 cri.go:89] found id: "a71adf18e73dd3877d49c754226be539d6ccecca0c8d845a84e7cc52f36eebe7"
	I1124 04:18:24.357346  493179 cri.go:89] found id: "cb80ca0ac5438e0cbc64a217d24df56f63a755bd503a1dfd46fc74505c3a9a6a"
	I1124 04:18:24.357367  493179 cri.go:89] found id: "3201ebcdcd96c85d6ccc8935814a307b94f8cb6caa93667464e51dc85132e068"
	I1124 04:18:24.357390  493179 cri.go:89] found id: "88db140510be739f963482f2996de33b78a17e5b533d83b82a40f234765849dd"
	I1124 04:18:24.357411  493179 cri.go:89] found id: "46b464c3ef546ad426e20a096b6d507622c061f58b20e93dcb5f51f5429e5a56"
	I1124 04:18:24.357434  493179 cri.go:89] found id: "8ecff8f50d3922e20ff3b13ee70cfc72ccd41cc0050330e8fa59fb1fd12b3749"
	I1124 04:18:24.357461  493179 cri.go:89] found id: "dbe92e95274246f2a0d7b1498caff07e467f7316997bfdfb9d6b5eb74f4a8db9"
	I1124 04:18:24.357483  493179 cri.go:89] found id: "13b95fe1883b52b2af09a03014debb9c88264e08051cf4f73c66109c0d914123"
	I1124 04:18:24.357504  493179 cri.go:89] found id: "997f8bac617eec8cefe694cb39fb8f8ea3728aa8ff4e30ca40e239b9ab5d2a8a"
	I1124 04:18:24.357524  493179 cri.go:89] found id: ""
	I1124 04:18:24.357611  493179 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:18:24.369621  493179 retry.go:31] will retry after 517.22626ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:18:24Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:18:24.887074  493179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:18:24.901215  493179 pause.go:52] kubelet running: false
	I1124 04:18:24.901289  493179 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:18:25.110394  493179 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:18:25.110576  493179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:18:25.209753  493179 cri.go:89] found id: "32ff3b0eef3a48557f3abf7a60a0b3a38e475c5ff365fe65364698c01cc51e5c"
	I1124 04:18:25.209818  493179 cri.go:89] found id: "0558f06299e5c2fe843cf590ec463909b96624be39d7369aaf0d96a1bfd563ac"
	I1124 04:18:25.209841  493179 cri.go:89] found id: "a71adf18e73dd3877d49c754226be539d6ccecca0c8d845a84e7cc52f36eebe7"
	I1124 04:18:25.209864  493179 cri.go:89] found id: "cb80ca0ac5438e0cbc64a217d24df56f63a755bd503a1dfd46fc74505c3a9a6a"
	I1124 04:18:25.209881  493179 cri.go:89] found id: "3201ebcdcd96c85d6ccc8935814a307b94f8cb6caa93667464e51dc85132e068"
	I1124 04:18:25.209900  493179 cri.go:89] found id: "88db140510be739f963482f2996de33b78a17e5b533d83b82a40f234765849dd"
	I1124 04:18:25.209924  493179 cri.go:89] found id: "46b464c3ef546ad426e20a096b6d507622c061f58b20e93dcb5f51f5429e5a56"
	I1124 04:18:25.209944  493179 cri.go:89] found id: "8ecff8f50d3922e20ff3b13ee70cfc72ccd41cc0050330e8fa59fb1fd12b3749"
	I1124 04:18:25.209976  493179 cri.go:89] found id: "dbe92e95274246f2a0d7b1498caff07e467f7316997bfdfb9d6b5eb74f4a8db9"
	I1124 04:18:25.209998  493179 cri.go:89] found id: "13b95fe1883b52b2af09a03014debb9c88264e08051cf4f73c66109c0d914123"
	I1124 04:18:25.210016  493179 cri.go:89] found id: "997f8bac617eec8cefe694cb39fb8f8ea3728aa8ff4e30ca40e239b9ab5d2a8a"
	I1124 04:18:25.210039  493179 cri.go:89] found id: ""
	I1124 04:18:25.210108  493179 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:18:25.230866  493179 out.go:203] 
	W1124 04:18:25.233882  493179 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:18:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:18:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 04:18:25.233942  493179 out.go:285] * 
	* 
	W1124 04:18:25.240152  493179 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 04:18:25.243049  493179 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-520529 --alsologtostderr -v=1 failed: exit status 80
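
The failure signature is the same as in the no-preload Pause test earlier: after disabling the kubelet, minikube lists running CRI containers and then shells out to sudo runc list -f json, which exits 1 because /run/runc is absent on this crio node; the retry.go lines show two short randomized retries before pause gives up with GUEST_PAUSE. A minimal sketch of that probe-and-retry shape using plain os/exec (the attempt count and delays are illustrative, not minikube's actual retry.go):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // listRunc runs the same command the stderr capture shows minikube using.
    func listRunc() ([]byte, error) {
        return exec.Command("sudo", "runc", "list", "-f", "json").Output()
    }

    func main() {
        var lastErr error
        for attempt := 0; attempt < 3; attempt++ {
            out, err := listRunc()
            if err == nil {
                fmt.Printf("running containers: %s\n", out)
                return
            }
            lastErr = err
            // Short randomized delay, mirroring the "will retry after
            // 265.954915ms" / "517.22626ms" lines above.
            time.Sleep(time.Duration(200+rand.Intn(400)) * time.Millisecond)
        }
        // On this node every attempt fails with "open /run/runc: no such
        // file or directory", so pause exits with GUEST_PAUSE.
        fmt.Println("giving up:", lastErr)
    }

One plausible reading is that no container state was ever written under runc's default root on this node (for example if crio is fronting a different OCI runtime), so the listing fails before pause can freeze anything; the report itself only records the missing-directory error.
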
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-520529
helpers_test.go:243: (dbg) docker inspect embed-certs-520529:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb",
	        "Created": "2025-11-24T04:15:31.362300869Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 487412,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:17:17.10431829Z",
	            "FinishedAt": "2025-11-24T04:17:16.231804085Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/hosts",
	        "LogPath": "/var/lib/docker/containers/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb-json.log",
	        "Name": "/embed-certs-520529",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-520529:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-520529",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb",
	                "LowerDir": "/var/lib/docker/overlay2/802b4ddd893465d41da7d4aef59a4908de4bca3ef59f3154a91d2e1417b23762-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/802b4ddd893465d41da7d4aef59a4908de4bca3ef59f3154a91d2e1417b23762/merged",
	                "UpperDir": "/var/lib/docker/overlay2/802b4ddd893465d41da7d4aef59a4908de4bca3ef59f3154a91d2e1417b23762/diff",
	                "WorkDir": "/var/lib/docker/overlay2/802b4ddd893465d41da7d4aef59a4908de4bca3ef59f3154a91d2e1417b23762/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-520529",
	                "Source": "/var/lib/docker/volumes/embed-certs-520529/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-520529",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-520529",
	                "name.minikube.sigs.k8s.io": "embed-certs-520529",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43f5aa6dc6abb6c5afecb26151274da85a1c075060b8315f72c3ddfb672143f2",
	            "SandboxKey": "/var/run/docker/netns/43f5aa6dc6ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-520529": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:25:e8:1f:6d:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e3e6fa2232739e2881841760b0f4ae6184afdbd9df8a88d4c082b05eeb608469",
	                    "EndpointID": "f1c5299e4182ffb676d65d68ebc9a3818eaad9fe664b76eb4391960344c4f6c3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-520529",
	                        "8a3eb121088a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
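
The inspect output shows the kic container publishing its fixed ports (22, 2376, 5000, 8443, 32443) on ephemeral host ports bound to 127.0.0.1, which is where the earlier SSH dial to 127.0.0.1:33446 comes from: the cli_runner line resolves "22/tcp" through a Go template. A minimal sketch of that lookup, assuming the docker CLI is on PATH (hostPort is an illustrative wrapper):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // hostPort resolves the ephemeral host port Docker assigned to a
    // container port, using the same template as the cli_runner call above.
    func hostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPort("embed-certs-520529", "22/tcp")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("ssh endpoint: 127.0.0.1:" + p) // e.g. 127.0.0.1:33446
    }
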
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-520529 -n embed-certs-520529
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-520529 -n embed-certs-520529: exit status 2 (366.359414ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
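
Both status probes in this post-mortem render one field of minikube's status through a Go text/template supplied via --format ({{.APIServer}} and {{.Host}}), which is why the captured stdout is the single word "Running". A minimal sketch of that mechanism (the Status struct is illustrative; minikube's real type has more fields):

    package main

    import (
        "os"
        "text/template"
    )

    // Status holds the two fields the --format templates above select.
    type Status struct {
        Host      string
        APIServer string
    }

    func main() {
        st := Status{Host: "Running", APIServer: "Paused"}
        // Equivalent of `--format={{.Host}}`.
        tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
        if err := tmpl.Execute(os.Stdout, st); err != nil {
            panic(err)
        }
    }
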
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-520529 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-520529 logs -n 25: (1.774782805s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702       │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:14 UTC │
	│ image   │ old-k8s-version-762702 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762702       │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ pause   │ -p old-k8s-version-762702 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762702       │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │                     │
	│ delete  │ -p old-k8s-version-762702                                                                                                                                                                                                                     │ old-k8s-version-762702       │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ delete  │ -p old-k8s-version-762702                                                                                                                                                                                                                     │ old-k8s-version-762702       │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p cert-expiration-918798 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-918798       │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:15 UTC │
	│ delete  │ -p cert-expiration-918798                                                                                                                                                                                                                     │ cert-expiration-918798       │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:15 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-600301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │                     │
	│ stop    │ -p no-preload-600301 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable dashboard -p no-preload-600301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-520529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ stop    │ -p embed-certs-520529 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-520529 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:18 UTC │
	│ image   │ no-preload-600301 image list --format=json                                                                                                                                                                                                    │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ pause   │ -p no-preload-600301 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p disable-driver-mounts-995056                                                                                                                                                                                                               │ disable-driver-mounts-995056 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ image   │ embed-certs-520529 image list --format=json                                                                                                                                                                                                   │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ pause   │ -p embed-certs-520529 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:17:50
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
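	Decoding a line in this (k)log format, e.g. "I1124 04:17:50.694959  490948 out.go:360] ...": "I" is the severity (Info; W/E/F for warning/error/fatal), "1124" is the month and day, the microsecond timestamp follows, "490948" is the thread id, and "out.go:360" names the emitting source file and line, after which the message begins.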
	I1124 04:17:50.694959  490948 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:17:50.695087  490948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:17:50.695098  490948 out.go:374] Setting ErrFile to fd 2...
	I1124 04:17:50.695103  490948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:17:50.695357  490948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:17:50.695804  490948 out.go:368] Setting JSON to false
	I1124 04:17:50.696819  490948 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10800,"bootTime":1763947071,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:17:50.696896  490948 start.go:143] virtualization:  
	I1124 04:17:50.700880  490948 out.go:179] * [default-k8s-diff-port-303179] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:17:50.705074  490948 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:17:50.705146  490948 notify.go:221] Checking for updates...
	I1124 04:17:50.711561  490948 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:17:50.714778  490948 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:17:50.717845  490948 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:17:50.720989  490948 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:17:50.723973  490948 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:17:50.727454  490948 config.go:182] Loaded profile config "embed-certs-520529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:17:50.727560  490948 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:17:50.765188  490948 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:17:50.765329  490948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:17:50.823770  490948 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:17:50.814088299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
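	The struct dumped above is the same one "docker system info" templates over, so individual fields can be pulled out directly; a minimal sketch (the field names NCPU, MemTotal and CgroupDriver are taken from the dump above):
	
		docker system info --format '{{.NCPU}} cpus, {{.MemTotal}} bytes ram, cgroup driver {{.CgroupDriver}}'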
	I1124 04:17:50.823874  490948 docker.go:319] overlay module found
	I1124 04:17:50.827196  490948 out.go:179] * Using the docker driver based on user configuration
	I1124 04:17:50.830131  490948 start.go:309] selected driver: docker
	I1124 04:17:50.830153  490948 start.go:927] validating driver "docker" against <nil>
	I1124 04:17:50.830169  490948 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:17:50.831116  490948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:17:50.887357  490948 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:17:50.878355609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:17:50.887535  490948 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 04:17:50.887758  490948 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:17:50.890749  490948 out.go:179] * Using Docker driver with root privileges
	I1124 04:17:50.893658  490948 cni.go:84] Creating CNI manager for ""
	I1124 04:17:50.893729  490948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:17:50.893750  490948 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 04:17:50.893837  490948 start.go:353] cluster config:
	{Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
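	This cluster config is persisted as JSON in the profile directory (see the "Saving config" line just below); individual fields can be pulled back out with jq, assuming it is installed and that the JSON keys follow the Go struct field names shown above:
	
		jq '.Name, .APIServerPort, .KubernetesConfig.KubernetesVersion' \
		  /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/config.json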
	I1124 04:17:50.897053  490948 out.go:179] * Starting "default-k8s-diff-port-303179" primary control-plane node in "default-k8s-diff-port-303179" cluster
	I1124 04:17:50.899848  490948 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:17:50.902774  490948 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:17:50.905748  490948 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:17:50.905799  490948 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 04:17:50.905827  490948 cache.go:65] Caching tarball of preloaded images
	I1124 04:17:50.905833  490948 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:17:50.905912  490948 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:17:50.905922  490948 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 04:17:50.906023  490948 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/config.json ...
	I1124 04:17:50.906043  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/config.json: {Name:mke899bf3df2fc5c9ba13e5b10e48424e42ba10f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:17:50.926102  490948 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:17:50.926126  490948 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:17:50.926147  490948 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:17:50.926177  490948 start.go:360] acquireMachinesLock for default-k8s-diff-port-303179: {Name:mk876fcea2f12d71199d194b5970210275c2b905 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:17:50.926283  490948 start.go:364] duration metric: took 84.563µs to acquireMachinesLock for "default-k8s-diff-port-303179"
	I1124 04:17:50.926314  490948 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:17:50.926386  490948 start.go:125] createHost starting for "" (driver="docker")
	W1124 04:17:47.156470  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:17:49.157131  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:17:51.157796  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	I1124 04:17:50.929768  490948 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 04:17:50.930010  490948 start.go:159] libmachine.API.Create for "default-k8s-diff-port-303179" (driver="docker")
	I1124 04:17:50.930046  490948 client.go:173] LocalClient.Create starting
	I1124 04:17:50.930160  490948 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem
	I1124 04:17:50.930204  490948 main.go:143] libmachine: Decoding PEM data...
	I1124 04:17:50.930221  490948 main.go:143] libmachine: Parsing certificate...
	I1124 04:17:50.930277  490948 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem
	I1124 04:17:50.930300  490948 main.go:143] libmachine: Decoding PEM data...
	I1124 04:17:50.930312  490948 main.go:143] libmachine: Parsing certificate...
	I1124 04:17:50.930721  490948 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 04:17:50.951561  490948 cli_runner.go:211] docker network inspect default-k8s-diff-port-303179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 04:17:50.951650  490948 network_create.go:284] running [docker network inspect default-k8s-diff-port-303179] to gather additional debugging logs...
	I1124 04:17:50.951673  490948 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303179
	W1124 04:17:50.968687  490948 cli_runner.go:211] docker network inspect default-k8s-diff-port-303179 returned with exit code 1
	I1124 04:17:50.968778  490948 network_create.go:287] error running [docker network inspect default-k8s-diff-port-303179]: docker network inspect default-k8s-diff-port-303179: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-303179 not found
	I1124 04:17:50.968797  490948 network_create.go:289] output of [docker network inspect default-k8s-diff-port-303179]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-303179 not found
	
	** /stderr **
	I1124 04:17:50.968916  490948 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:17:50.987083  490948 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-740fb099fccc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:7a:9c:b0:4d:41} reservation:<nil>}
	I1124 04:17:50.987483  490948 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b0f25a7c590 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:53:b3:a1:55:1a} reservation:<nil>}
	I1124 04:17:50.987741  490948 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c1d995330d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:83:d9:0c:83:10} reservation:<nil>}
	I1124 04:17:50.988046  490948 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e3e6fa223273 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:be:59:ed:b0:cb:f8} reservation:<nil>}
	I1124 04:17:50.988491  490948 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a54a80}
	I1124 04:17:50.988517  490948 network_create.go:124] attempt to create docker network default-k8s-diff-port-303179 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 04:17:50.988668  490948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-303179 default-k8s-diff-port-303179
	I1124 04:17:51.072462  490948 network_create.go:108] docker network default-k8s-diff-port-303179 192.168.85.0/24 created
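	Condensed, the network bring-up above is a probe-then-create sequence; the flags below are copied from the log, and the 192.168.85.0/24 subnet is simply the first free /24 after the four already-taken ones:
	
		docker network inspect default-k8s-diff-port-303179 >/dev/null 2>&1 || \
		docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
		  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
		  --label=created_by.minikube.sigs.k8s.io=true \
		  --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-303179 \
		  default-k8s-diff-port-303179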
	I1124 04:17:51.072497  490948 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-303179" container
	I1124 04:17:51.072588  490948 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 04:17:51.089728  490948 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-303179 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-303179 --label created_by.minikube.sigs.k8s.io=true
	I1124 04:17:51.114616  490948 oci.go:103] Successfully created a docker volume default-k8s-diff-port-303179
	I1124 04:17:51.114714  490948 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-303179-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-303179 --entrypoint /usr/bin/test -v default-k8s-diff-port-303179:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 04:17:51.679383  490948 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-303179
	I1124 04:17:51.679472  490948 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:17:51.679489  490948 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 04:17:51.679560  490948 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-303179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	W1124 04:17:53.656741  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:17:56.156679  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	I1124 04:17:56.123329  490948 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-303179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (4.443721196s)
	I1124 04:17:56.123365  490948 kic.go:203] duration metric: took 4.44387154s to extract preloaded images to volume ...
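	Nothing in that extraction step is minikube-specific: tar's -I flag delegates decompression to an external program (here lz4), so the unpack reduces to the following inside any container that mounts the preload tarball and the target volume at these paths:
	
		tar -I lz4 -xf /preloaded.tar -C /extractDir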
	W1124 04:17:56.123517  490948 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 04:17:56.123635  490948 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 04:17:56.184270  490948 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-303179 --name default-k8s-diff-port-303179 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-303179 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-303179 --network default-k8s-diff-port-303179 --ip 192.168.85.2 --volume default-k8s-diff-port-303179:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 04:17:56.497586  490948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Running}}
	I1124 04:17:56.519383  490948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:17:56.545782  490948 cli_runner.go:164] Run: docker exec default-k8s-diff-port-303179 stat /var/lib/dpkg/alternatives/iptables
	I1124 04:17:56.602646  490948 oci.go:144] the created container "default-k8s-diff-port-303179" has a running status.
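	Because the container publishes its service ports to ephemeral host ports on 127.0.0.1 (--publish=127.0.0.1::22 and friends in the docker run above), the actual mappings can be recovered with docker port; judging from the SSH port used later in this log, the result here would plausibly be:
	
		$ docker port default-k8s-diff-port-303179 22
		127.0.0.1:33451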
	I1124 04:17:56.602676  490948 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa...
	I1124 04:17:56.838776  490948 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 04:17:56.874113  490948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:17:56.898295  490948 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 04:17:56.898315  490948 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-303179 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 04:17:56.950176  490948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:17:56.984819  490948 machine.go:94] provisionDockerMachine start ...
	I1124 04:17:56.984912  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:17:57.023465  490948 main.go:143] libmachine: Using SSH client type: native
	I1124 04:17:57.023819  490948 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1124 04:17:57.023834  490948 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:17:57.024509  490948 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 04:18:00.396368  490948 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-303179
	
	I1124 04:18:00.396396  490948 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-303179"
	I1124 04:18:00.396478  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:00.438129  490948 main.go:143] libmachine: Using SSH client type: native
	I1124 04:18:00.438512  490948 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1124 04:18:00.438528  490948 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-303179 && echo "default-k8s-diff-port-303179" | sudo tee /etc/hostname
	I1124 04:18:00.600406  490948 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-303179
	
	I1124 04:18:00.600527  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:00.619075  490948 main.go:143] libmachine: Using SSH client type: native
	I1124 04:18:00.619398  490948 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1124 04:18:00.619420  490948 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-303179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-303179/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-303179' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1124 04:17:58.656163  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:18:00.659667  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	I1124 04:18:00.766592  490948 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 04:18:00.766620  490948 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:18:00.766649  490948 ubuntu.go:190] setting up certificates
	I1124 04:18:00.766660  490948 provision.go:84] configureAuth start
	I1124 04:18:00.766721  490948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303179
	I1124 04:18:00.784043  490948 provision.go:143] copyHostCerts
	I1124 04:18:00.784115  490948 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:18:00.784129  490948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:18:00.784209  490948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:18:00.784312  490948 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:18:00.784323  490948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:18:00.784351  490948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:18:00.784408  490948 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:18:00.784419  490948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:18:00.784444  490948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:18:00.784490  490948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-303179 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-303179 localhost minikube]
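	Whether those SANs actually landed in the generated server certificate can be confirmed with stock openssl:
	
		openssl x509 -noout -text \
		  -in /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem \
		  | grep -A1 'Subject Alternative Name'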
	I1124 04:18:01.212935  490948 provision.go:177] copyRemoteCerts
	I1124 04:18:01.213014  490948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:18:01.213055  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:01.233444  490948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:18:01.339901  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:18:01.358998  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 04:18:01.378332  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 04:18:01.405132  490948 provision.go:87] duration metric: took 638.445601ms to configureAuth
	I1124 04:18:01.405177  490948 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:18:01.405434  490948 config.go:182] Loaded profile config "default-k8s-diff-port-303179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:18:01.405611  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:01.434745  490948 main.go:143] libmachine: Using SSH client type: native
	I1124 04:18:01.435192  490948 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1124 04:18:01.435221  490948 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:18:01.853859  490948 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
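	That drop-in only takes effect because the crio unit in the kic base image reads it, presumably via an EnvironmentFile= directive that expands $CRIO_MINIKUBE_OPTIONS into ExecStart (an assumption about the image, not something shown in this log); it can be checked on the node with:
	
		systemctl cat crio | grep -i -A1 EnvironmentFile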
	I1124 04:18:01.853922  490948 machine.go:97] duration metric: took 4.869076984s to provisionDockerMachine
	I1124 04:18:01.853947  490948 client.go:176] duration metric: took 10.923888086s to LocalClient.Create
	I1124 04:18:01.853977  490948 start.go:167] duration metric: took 10.923967694s to libmachine.API.Create "default-k8s-diff-port-303179"
	I1124 04:18:01.853985  490948 start.go:293] postStartSetup for "default-k8s-diff-port-303179" (driver="docker")
	I1124 04:18:01.853995  490948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:18:01.854063  490948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:18:01.854109  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:01.873098  490948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:18:01.978896  490948 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:18:01.982403  490948 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:18:01.982434  490948 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:18:01.982446  490948 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:18:01.982527  490948 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:18:01.982638  490948 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:18:01.982743  490948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:18:01.991409  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:18:02.040594  490948 start.go:296] duration metric: took 186.59428ms for postStartSetup
	I1124 04:18:02.041048  490948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303179
	I1124 04:18:02.064970  490948 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/config.json ...
	I1124 04:18:02.065285  490948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:18:02.065327  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:02.084658  490948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:18:02.191557  490948 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:18:02.196267  490948 start.go:128] duration metric: took 11.26986634s to createHost
	I1124 04:18:02.196292  490948 start.go:83] releasing machines lock for "default-k8s-diff-port-303179", held for 11.269995942s
	I1124 04:18:02.196410  490948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303179
	I1124 04:18:02.213474  490948 ssh_runner.go:195] Run: cat /version.json
	I1124 04:18:02.213532  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:02.213546  490948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:18:02.213601  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:02.239773  490948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:18:02.256553  490948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:18:02.441276  490948 ssh_runner.go:195] Run: systemctl --version
	I1124 04:18:02.448781  490948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:18:02.492615  490948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:18:02.497062  490948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:18:02.497194  490948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:18:02.527097  490948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 04:18:02.527118  490948 start.go:496] detecting cgroup driver to use...
	I1124 04:18:02.527151  490948 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:18:02.527201  490948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:18:02.545934  490948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:18:02.559870  490948 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:18:02.559983  490948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:18:02.578714  490948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:18:02.601235  490948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:18:02.748994  490948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:18:02.884388  490948 docker.go:234] disabling docker service ...
	I1124 04:18:02.884517  490948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:18:02.906075  490948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:18:02.924129  490948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:18:03.047696  490948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:18:03.174469  490948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:18:03.187843  490948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:18:03.203545  490948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:18:03.203615  490948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.212547  490948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:18:03.212692  490948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.221891  490948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.231279  490948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.240546  490948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:18:03.248760  490948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.257558  490948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.271830  490948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.280639  490948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:18:03.288525  490948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:18:03.295952  490948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:18:03.418756  490948 ssh_runner.go:195] Run: sudo systemctl restart crio
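	Reconstructed from the sed substitutions above (not read back from the node), the touched fragment of /etc/crio/crio.conf.d/02-crio.conf should now read roughly:
	
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]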
	I1124 04:18:03.590568  490948 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:18:03.590660  490948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:18:03.594598  490948 start.go:564] Will wait 60s for crictl version
	I1124 04:18:03.594691  490948 ssh_runner.go:195] Run: which crictl
	I1124 04:18:03.598163  490948 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:18:03.623147  490948 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:18:03.623258  490948 ssh_runner.go:195] Run: crio --version
	I1124 04:18:03.659678  490948 ssh_runner.go:195] Run: crio --version
	I1124 04:18:03.700550  490948 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:18:03.703438  490948 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:18:03.720282  490948 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 04:18:03.724485  490948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:18:03.734430  490948 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:18:03.734629  490948 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:18:03.734694  490948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:18:03.773107  490948 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:18:03.773134  490948 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:18:03.773187  490948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:18:03.799519  490948 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:18:03.799544  490948 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:18:03.799552  490948 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1124 04:18:03.799642  490948 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-303179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
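	The bare ExecStart= line in the unit above is the standard systemd idiom for clearing an inherited ExecStart before redefining it; once this drop-in is written (to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, a few lines below), the merged result can be reviewed with:
	
		systemctl cat kubelet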
	I1124 04:18:03.799722  490948 ssh_runner.go:195] Run: crio config
	I1124 04:18:03.864437  490948 cni.go:84] Creating CNI manager for ""
	I1124 04:18:03.864463  490948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:18:03.864480  490948 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:18:03.864602  490948 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-303179 NodeName:default-k8s-diff-port-303179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:18:03.864777  490948 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-303179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
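
The block above is the complete kubeadm configuration minikube generates for this profile; a few lines below it is written to /var/tmp/minikube/kubeadm.yaml.new on the node. As a minimal sketch using only a standard kubeadm flag (nothing specific to this run), the file can be validated and previewed without touching the cluster:

	# Render and validate everything kubeadm would create from this config,
	# without applying any of it (--dry-run is a stock kubeadm init flag):
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
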
	
	I1124 04:18:03.864855  490948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:18:03.873154  490948 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:18:03.873281  490948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:18:03.881221  490948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 04:18:03.895053  490948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:18:03.908149  490948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1124 04:18:03.920909  490948 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:18:03.924490  490948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
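
The /etc/hosts rewrite above is a filter-and-append pattern: drop any stale control-plane.minikube.internal mapping, append the current one, and copy a temp file over /etc/hosts, so repeated runs stay idempotent. The same technique unrolled, with the host and IP taken from this run:

	HOST=control-plane.minikube.internal
	IP=192.168.85.2
	# keep every line except the old mapping, then append the fresh one
	{ grep -v $'\t'"${HOST}"'$' /etc/hosts; printf '%s\t%s\n' "${IP}" "${HOST}"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts
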
	I1124 04:18:03.934755  490948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:18:04.052655  490948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:18:04.077352  490948 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179 for IP: 192.168.85.2
	I1124 04:18:04.077415  490948 certs.go:195] generating shared ca certs ...
	I1124 04:18:04.077448  490948 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.077609  490948 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:18:04.077683  490948 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:18:04.077708  490948 certs.go:257] generating profile certs ...
	I1124 04:18:04.077784  490948 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.key
	I1124 04:18:04.077814  490948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt with IP's: []
	I1124 04:18:04.195277  490948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt ...
	I1124 04:18:04.195309  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: {Name:mk36afb6ae7e610b32c198c0358f47b75f2fb0e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.195509  490948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.key ...
	I1124 04:18:04.195526  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.key: {Name:mkcd9c70b7eacc96e3029affa66d338ba32ec593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.195625  490948 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key.0cae04f4
	I1124 04:18:04.195643  490948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt.0cae04f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 04:18:04.594712  490948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt.0cae04f4 ...
	I1124 04:18:04.594745  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt.0cae04f4: {Name:mkfe7b719c01bb3e8edd08cd2a3055a6edfa7b3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.594946  490948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key.0cae04f4 ...
	I1124 04:18:04.594962  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key.0cae04f4: {Name:mk54f2bd8adc5d6cce414d204881c5560ea31ef3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.595049  490948 certs.go:382] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt.0cae04f4 -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt
	I1124 04:18:04.595139  490948 certs.go:386] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key.0cae04f4 -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key
	I1124 04:18:04.595204  490948 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.key
	I1124 04:18:04.595222  490948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.crt with IP's: []
	I1124 04:18:04.718777  490948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.crt ...
	I1124 04:18:04.718815  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.crt: {Name:mk4e9537ed1729474e7e143c6f815468d69786cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.718987  490948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.key ...
	I1124 04:18:04.719001  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.key: {Name:mk010733f937fce97398c8edc10774eb9ccf13bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.719202  490948 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:18:04.719250  490948 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:18:04.719264  490948 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:18:04.719292  490948 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:18:04.719322  490948 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:18:04.719350  490948 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:18:04.719402  490948 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:18:04.720068  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:18:04.739300  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:18:04.757875  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:18:04.776252  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:18:04.794316  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 04:18:04.812782  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 04:18:04.830576  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:18:04.848757  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 04:18:04.867213  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:18:04.890200  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:18:04.919345  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:18:04.941488  490948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:18:04.955441  490948 ssh_runner.go:195] Run: openssl version
	I1124 04:18:04.962367  490948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:18:04.970643  490948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:18:04.974359  490948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:18:04.974509  490948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:18:05.017311  490948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:18:05.026443  490948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:18:05.035426  490948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:18:05.039491  490948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:18:05.039615  490948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:18:05.087018  490948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
	I1124 04:18:05.096162  490948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:18:05.105567  490948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:18:05.109969  490948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:18:05.110066  490948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:18:05.152177  490948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
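
The openssl steps above implement OpenSSL's hashed-CA-directory lookup: each CA placed in /etc/ssl/certs gets a <subject-hash>.0 symlink (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run) so TLS clients can locate it by hash. A sketch of the same operation for one cert, assuming it is already linked into /etc/ssl/certs as the log does first:

	# compute the subject hash openssl uses for directory lookups
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# expose the cert under that hash; the ".0" suffix disambiguates hash collisions
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
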
	I1124 04:18:05.162100  490948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:18:05.165887  490948 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 04:18:05.165943  490948 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:18:05.166017  490948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:18:05.166074  490948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:18:05.194311  490948 cri.go:89] found id: ""
	I1124 04:18:05.194387  490948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:18:05.202508  490948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 04:18:05.211081  490948 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 04:18:05.211243  490948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 04:18:05.219461  490948 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 04:18:05.219490  490948 kubeadm.go:158] found existing configuration files:
	
	I1124 04:18:05.219551  490948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1124 04:18:05.227300  490948 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 04:18:05.227366  490948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 04:18:05.235282  490948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1124 04:18:05.243173  490948 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 04:18:05.243252  490948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 04:18:05.250527  490948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1124 04:18:05.258224  490948 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 04:18:05.258337  490948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 04:18:05.265796  490948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1124 04:18:05.273481  490948 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 04:18:05.273559  490948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 04:18:05.281228  490948 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 04:18:05.321199  490948 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 04:18:05.321480  490948 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 04:18:05.345618  490948 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 04:18:05.345734  490948 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 04:18:05.345801  490948 kubeadm.go:319] OS: Linux
	I1124 04:18:05.345906  490948 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 04:18:05.345991  490948 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 04:18:05.346059  490948 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 04:18:05.346113  490948 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 04:18:05.346179  490948 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 04:18:05.346261  490948 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 04:18:05.346324  490948 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 04:18:05.346374  490948 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 04:18:05.346420  490948 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 04:18:05.424123  490948 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 04:18:05.424323  490948 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 04:18:05.424460  490948 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 04:18:05.434871  490948 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 04:18:05.441714  490948 out.go:252]   - Generating certificates and keys ...
	I1124 04:18:05.441861  490948 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 04:18:05.441947  490948 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	W1124 04:18:03.157589  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:18:05.157675  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	I1124 04:18:05.713614  490948 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 04:18:06.976046  490948 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 04:18:07.336487  490948 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 04:18:08.257018  490948 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 04:18:08.373979  490948 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 04:18:08.374366  490948 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-303179 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 04:18:08.696003  490948 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 04:18:08.696335  490948 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-303179 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 04:18:10.079153  490948 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 04:18:10.598352  490948 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	W1124 04:18:07.657455  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	I1124 04:18:09.657711  487285 pod_ready.go:94] pod "coredns-66bc5c9577-bvwhr" is "Ready"
	I1124 04:18:09.657736  487285 pod_ready.go:86] duration metric: took 38.007659988s for pod "coredns-66bc5c9577-bvwhr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:09.661332  487285 pod_ready.go:83] waiting for pod "etcd-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:09.667255  487285 pod_ready.go:94] pod "etcd-embed-certs-520529" is "Ready"
	I1124 04:18:09.667282  487285 pod_ready.go:86] duration metric: took 5.922887ms for pod "etcd-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:09.670599  487285 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:09.676234  487285 pod_ready.go:94] pod "kube-apiserver-embed-certs-520529" is "Ready"
	I1124 04:18:09.676259  487285 pod_ready.go:86] duration metric: took 5.637731ms for pod "kube-apiserver-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:09.679245  487285 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:09.855508  487285 pod_ready.go:94] pod "kube-controller-manager-embed-certs-520529" is "Ready"
	I1124 04:18:09.855601  487285 pod_ready.go:86] duration metric: took 176.277847ms for pod "kube-controller-manager-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:10.055357  487285 pod_ready.go:83] waiting for pod "kube-proxy-dt4th" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:10.459884  487285 pod_ready.go:94] pod "kube-proxy-dt4th" is "Ready"
	I1124 04:18:10.459906  487285 pod_ready.go:86] duration metric: took 404.472331ms for pod "kube-proxy-dt4th" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:10.654861  487285 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:11.056775  487285 pod_ready.go:94] pod "kube-scheduler-embed-certs-520529" is "Ready"
	I1124 04:18:11.056805  487285 pod_ready.go:86] duration metric: took 401.921252ms for pod "kube-scheduler-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:11.056820  487285 pod_ready.go:40] duration metric: took 39.410703509s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:18:11.147150  487285 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 04:18:11.150432  487285 out.go:179] * Done! kubectl is now configured to use "embed-certs-520529" cluster and "default" namespace by default
	I1124 04:18:11.119798  490948 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 04:18:11.120614  490948 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 04:18:11.457142  490948 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 04:18:11.987946  490948 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 04:18:12.203712  490948 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 04:18:12.363411  490948 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 04:18:12.722871  490948 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 04:18:12.723496  490948 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 04:18:12.726526  490948 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 04:18:12.730302  490948 out.go:252]   - Booting up control plane ...
	I1124 04:18:12.730429  490948 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 04:18:12.730576  490948 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 04:18:12.732554  490948 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 04:18:12.748631  490948 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 04:18:12.748975  490948 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 04:18:12.757548  490948 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 04:18:12.757926  490948 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 04:18:12.758139  490948 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 04:18:12.900630  490948 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 04:18:12.900756  490948 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 04:18:13.902906  490948 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002325882s
	I1124 04:18:13.907840  490948 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 04:18:13.907937  490948 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1124 04:18:13.908225  490948 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 04:18:13.908320  490948 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 04:18:18.510187  490948 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.601774026s
	I1124 04:18:19.054973  490948 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.147039664s
	I1124 04:18:20.910765  490948 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002816757s
	I1124 04:18:20.934561  490948 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 04:18:20.968729  490948 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 04:18:20.987520  490948 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 04:18:20.987736  490948 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-303179 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 04:18:21.006778  490948 kubeadm.go:319] [bootstrap-token] Using token: 3da3my.so862l6ukbwktov0
	I1124 04:18:21.009580  490948 out.go:252]   - Configuring RBAC rules ...
	I1124 04:18:21.009710  490948 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 04:18:21.015114  490948 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 04:18:21.026239  490948 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 04:18:21.033780  490948 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 04:18:21.038963  490948 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 04:18:21.043709  490948 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 04:18:21.319249  490948 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 04:18:21.760875  490948 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 04:18:22.319509  490948 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 04:18:22.320685  490948 kubeadm.go:319] 
	I1124 04:18:22.320766  490948 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 04:18:22.320779  490948 kubeadm.go:319] 
	I1124 04:18:22.320864  490948 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 04:18:22.320875  490948 kubeadm.go:319] 
	I1124 04:18:22.320900  490948 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 04:18:22.320964  490948 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 04:18:22.321025  490948 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 04:18:22.321039  490948 kubeadm.go:319] 
	I1124 04:18:22.321093  490948 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 04:18:22.321102  490948 kubeadm.go:319] 
	I1124 04:18:22.321158  490948 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 04:18:22.321164  490948 kubeadm.go:319] 
	I1124 04:18:22.321215  490948 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 04:18:22.321296  490948 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 04:18:22.321369  490948 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 04:18:22.321377  490948 kubeadm.go:319] 
	I1124 04:18:22.321461  490948 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 04:18:22.321543  490948 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 04:18:22.321552  490948 kubeadm.go:319] 
	I1124 04:18:22.321642  490948 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 3da3my.so862l6ukbwktov0 \
	I1124 04:18:22.321752  490948 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 \
	I1124 04:18:22.321779  490948 kubeadm.go:319] 	--control-plane 
	I1124 04:18:22.321784  490948 kubeadm.go:319] 
	I1124 04:18:22.321868  490948 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 04:18:22.321875  490948 kubeadm.go:319] 
	I1124 04:18:22.321978  490948 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 3da3my.so862l6ukbwktov0 \
	I1124 04:18:22.322085  490948 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 
	I1124 04:18:22.326923  490948 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 04:18:22.327143  490948 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 04:18:22.327248  490948 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
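
The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA's public key. It can be recomputed from the CA cert at any time, following the standard procedure from the kubeadm docs; the only run-specific assumption here is the path, since minikube keeps the CA at /var/lib/minikube/certs/ca.crt rather than the stock /etc/kubernetes/pki/ca.crt:

	# SHA-256 over the DER-encoded CA public key; must match the hash printed above
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
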
	I1124 04:18:22.327268  490948 cni.go:84] Creating CNI manager for ""
	I1124 04:18:22.327275  490948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:18:22.330432  490948 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 04:18:22.333486  490948 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 04:18:22.337894  490948 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 04:18:22.337915  490948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 04:18:22.355430  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 04:18:22.679211  490948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 04:18:22.679330  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:22.679405  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-303179 minikube.k8s.io/updated_at=2025_11_24T04_18_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=default-k8s-diff-port-303179 minikube.k8s.io/primary=true
	I1124 04:18:23.124999  490948 ops.go:34] apiserver oom_adj: -16
	I1124 04:18:23.125101  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:23.625157  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:24.125761  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:24.625276  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:25.125242  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:25.625157  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
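
The repeated "kubectl get sa default" runs above are minikube's readiness probe: the default ServiceAccount only appears once the controller manager's ServiceAccount controller is running, so its existence means the control plane is accepting and persisting writes. The retry being performed, written out as shell using the same binary and kubeconfig as the log:

	# poll until the ServiceAccount controller has created "default"
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
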
	
	
	==> CRI-O <==
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.233523809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.245295377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.246397946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.28729789Z" level=info msg="Created container 13b95fe1883b52b2af09a03014debb9c88264e08051cf4f73c66109c0d914123: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj/dashboard-metrics-scraper" id=6adf30cf-cb04-46a4-a0c9-0b0da694678c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.294614617Z" level=info msg="Starting container: 13b95fe1883b52b2af09a03014debb9c88264e08051cf4f73c66109c0d914123" id=ef00a7ab-532c-4eda-be8f-fe79d9c4c981 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.299925886Z" level=info msg="Started container" PID=1689 containerID=13b95fe1883b52b2af09a03014debb9c88264e08051cf4f73c66109c0d914123 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj/dashboard-metrics-scraper id=ef00a7ab-532c-4eda-be8f-fe79d9c4c981 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee17000730b0978eb3ce03dc51b839d5cc96e0553b54b4361b715e16cfb5d392
	Nov 24 04:18:07 embed-certs-520529 conmon[1687]: conmon 13b95fe1883b52b2af09 <ninfo>: container 1689 exited with status 1
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.417264306Z" level=info msg="Removing container: 1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac" id=981daa3f-3852-4a63-ba52-851b8c2bc8fd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.432024694Z" level=info msg="Error loading conmon cgroup of container 1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac: cgroup deleted" id=981daa3f-3852-4a63-ba52-851b8c2bc8fd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.438309022Z" level=info msg="Removed container 1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj/dashboard-metrics-scraper" id=981daa3f-3852-4a63-ba52-851b8c2bc8fd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.045675382Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.063468825Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.063653688Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.063754145Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.090907966Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.090942132Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.090962465Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.114619881Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.114654261Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.114670614Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.125120959Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.125157636Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.125179823Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.144888317Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.144933643Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	13b95fe1883b5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago       Exited              dashboard-metrics-scraper   2                   ee17000730b09       dashboard-metrics-scraper-6ffb444bf9-rckpj   kubernetes-dashboard
	32ff3b0eef3a4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   cc593f28ee306       storage-provisioner                          kube-system
	997f8bac617ee       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   2836bb8d9da84       kubernetes-dashboard-855c9754f9-ddq4w        kubernetes-dashboard
	13bbdb8cd12e6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   89bfabb6f236b       busybox                                      default
	0558f06299e5c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   943cf1d4ec820       coredns-66bc5c9577-bvwhr                     kube-system
	a71adf18e73dd       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   f00b57316239d       kube-proxy-dt4th                             kube-system
	cb80ca0ac5438       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   d4f7b8158c9a1       kindnet-tkncp                                kube-system
	3201ebcdcd96c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   cc593f28ee306       storage-provisioner                          kube-system
	88db140510be7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   8f148e20780aa       kube-apiserver-embed-certs-520529            kube-system
	46b464c3ef546       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   8ce6db1e7dfca       etcd-embed-certs-520529                      kube-system
	8ecff8f50d392       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   b7fc262b95ac6       kube-controller-manager-embed-certs-520529   kube-system
	dbe92e9527424       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   b31db0f2a32d6       kube-scheduler-embed-certs-520529            kube-system
	
	
	==> coredns [0558f06299e5c2fe843cf590ec463909b96624be39d7369aaf0d96a1bfd563ac] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35852 - 64584 "HINFO IN 7236708362425433878.2655873434030849362. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004835178s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-520529
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-520529
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=embed-certs-520529
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_16_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:15:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-520529
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:18:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:18:00 +0000   Mon, 24 Nov 2025 04:15:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:18:00 +0000   Mon, 24 Nov 2025 04:15:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:18:00 +0000   Mon, 24 Nov 2025 04:15:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:18:00 +0000   Mon, 24 Nov 2025 04:16:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-520529
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                cb05b9d1-526c-48cf-b8c9-27f04aa8373b
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-bvwhr                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-embed-certs-520529                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-tkncp                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-embed-certs-520529             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-embed-certs-520529    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-dt4th                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-embed-certs-520529             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rckpj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ddq4w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node embed-certs-520529 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node embed-certs-520529 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m35s)  kubelet          Node embed-certs-520529 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node embed-certs-520529 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node embed-certs-520529 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node embed-certs-520529 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s                  node-controller  Node embed-certs-520529 event: Registered Node embed-certs-520529 in Controller
	  Normal   NodeReady                100s                   kubelet          Node embed-certs-520529 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node embed-certs-520529 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node embed-certs-520529 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node embed-certs-520529 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node embed-certs-520529 event: Registered Node embed-certs-520529 in Controller
	
	
	==> dmesg <==
	[Nov24 03:55] overlayfs: idmapped layers are currently not supported
	[Nov24 03:56] overlayfs: idmapped layers are currently not supported
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	[Nov24 04:15] overlayfs: idmapped layers are currently not supported
	[ +47.476343] overlayfs: idmapped layers are currently not supported
	[Nov24 04:16] overlayfs: idmapped layers are currently not supported
	[Nov24 04:17] overlayfs: idmapped layers are currently not supported
	[Nov24 04:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [46b464c3ef546ad426e20a096b6d507622c061f58b20e93dcb5f51f5429e5a56] <==
	{"level":"warn","ts":"2025-11-24T04:17:27.286669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.307008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.332749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.363380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.406041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.427003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.451548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.463133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.480657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.523064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.551458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.575460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.580504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.604862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.619941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.640330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.662603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.712713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.731271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.742880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.758628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.801910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.812013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.832364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.912873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54176","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 04:18:26 up  3:00,  0 user,  load average: 3.16, 3.33, 2.85
	Linux embed-certs-520529 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cb80ca0ac5438e0cbc64a217d24df56f63a755bd503a1dfd46fc74505c3a9a6a] <==
	I1124 04:17:30.860895       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:17:30.862717       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 04:17:30.862913       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:17:30.862956       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:17:30.863005       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:17:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:17:31.115020       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:17:31.115128       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:17:31.115173       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:17:31.115842       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 04:18:01.116827       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 04:18:01.116948       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 04:18:01.117038       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 04:18:01.117118       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 04:18:02.716377       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:18:02.716509       1 metrics.go:72] Registering metrics
	I1124 04:18:02.716686       1 controller.go:711] "Syncing nftables rules"
	I1124 04:18:11.045253       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 04:18:11.045428       1 main.go:301] handling current node
	I1124 04:18:21.049679       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 04:18:21.049715       1 main.go:301] handling current node
	
	
	==> kube-apiserver [88db140510be739f963482f2996de33b78a17e5b533d83b82a40f234765849dd] <==
	I1124 04:17:29.574694       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 04:17:29.574998       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 04:17:29.575011       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 04:17:29.575186       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 04:17:29.575626       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 04:17:29.576396       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 04:17:29.580431       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:17:29.588434       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 04:17:29.588518       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 04:17:29.596312       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 04:17:29.610692       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 04:17:29.621389       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1124 04:17:29.622004       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 04:17:29.666627       1 cache.go:39] Caches are synced for autoregister controller
	I1124 04:17:30.086505       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:17:30.191352       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:17:30.358978       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 04:17:30.615985       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 04:17:30.800293       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:17:30.852881       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:17:31.036934       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.87.50"}
	I1124 04:17:31.091937       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.170.17"}
	I1124 04:17:32.861822       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 04:17:32.981785       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 04:17:33.329768       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8ecff8f50d3922e20ff3b13ee70cfc72ccd41cc0050330e8fa59fb1fd12b3749] <==
	I1124 04:17:32.862240       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 04:17:32.862353       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 04:17:32.862445       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-520529"
	I1124 04:17:32.862541       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 04:17:32.864671       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 04:17:32.865905       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 04:17:32.869182       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 04:17:32.871840       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 04:17:32.872283       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 04:17:32.874666       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 04:17:32.874983       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 04:17:32.875042       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 04:17:32.875080       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 04:17:32.875104       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 04:17:32.877140       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 04:17:32.879392       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 04:17:32.889752       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:17:32.889907       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 04:17:32.894565       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 04:17:32.903241       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:17:32.903271       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:17:32.903280       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:17:32.911615       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:17:32.915952       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 04:17:32.937991       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a71adf18e73dd3877d49c754226be539d6ccecca0c8d845a84e7cc52f36eebe7] <==
	I1124 04:17:31.286552       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:17:31.481315       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:17:31.582509       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:17:31.582624       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 04:17:31.582784       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:17:31.680504       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:17:31.680624       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:17:31.684504       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:17:31.684942       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:17:31.685220       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:17:31.686780       1 config.go:200] "Starting service config controller"
	I1124 04:17:31.686834       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:17:31.686883       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:17:31.686909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:17:31.686953       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:17:31.686979       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:17:31.687658       1 config.go:309] "Starting node config controller"
	I1124 04:17:31.690081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:17:31.690140       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:17:31.787166       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:17:31.787288       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:17:31.787321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dbe92e95274246f2a0d7b1498caff07e467f7316997bfdfb9d6b5eb74f4a8db9] <==
	I1124 04:17:31.430095       1 serving.go:386] Generated self-signed cert in-memory
	I1124 04:17:32.862998       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 04:17:32.863034       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:17:32.872428       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 04:17:32.872520       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 04:17:32.872546       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 04:17:32.872594       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 04:17:32.876106       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:17:32.876133       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:17:32.876153       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:17:32.876159       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:17:32.972637       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 04:17:32.976786       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:17:32.976900       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:17:33 embed-certs-520529 kubelet[793]: I1124 04:17:33.514075     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2df\" (UniqueName: \"kubernetes.io/projected/532d8426-c95e-41c5-9b89-a994820a332b-kube-api-access-gq2df\") pod \"kubernetes-dashboard-855c9754f9-ddq4w\" (UID: \"532d8426-c95e-41c5-9b89-a994820a332b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ddq4w"
	Nov 24 04:17:33 embed-certs-520529 kubelet[793]: I1124 04:17:33.514114     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjqn\" (UniqueName: \"kubernetes.io/projected/e44cda19-b8ea-4f37-8228-6beb1d6474b5-kube-api-access-prjqn\") pod \"dashboard-metrics-scraper-6ffb444bf9-rckpj\" (UID: \"e44cda19-b8ea-4f37-8228-6beb1d6474b5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj"
	Nov 24 04:17:33 embed-certs-520529 kubelet[793]: I1124 04:17:33.514143     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e44cda19-b8ea-4f37-8228-6beb1d6474b5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-rckpj\" (UID: \"e44cda19-b8ea-4f37-8228-6beb1d6474b5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj"
	Nov 24 04:17:33 embed-certs-520529 kubelet[793]: W1124 04:17:33.779492     793 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/crio-2836bb8d9da8440e237f5116d6c3a2bb34af4bff7193754a24c351039ddfb9f0 WatchSource:0}: Error finding container 2836bb8d9da8440e237f5116d6c3a2bb34af4bff7193754a24c351039ddfb9f0: Status 404 returned error can't find the container with id 2836bb8d9da8440e237f5116d6c3a2bb34af4bff7193754a24c351039ddfb9f0
	Nov 24 04:17:33 embed-certs-520529 kubelet[793]: W1124 04:17:33.795586     793 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/crio-ee17000730b0978eb3ce03dc51b839d5cc96e0553b54b4361b715e16cfb5d392 WatchSource:0}: Error finding container ee17000730b0978eb3ce03dc51b839d5cc96e0553b54b4361b715e16cfb5d392: Status 404 returned error can't find the container with id ee17000730b0978eb3ce03dc51b839d5cc96e0553b54b4361b715e16cfb5d392
	Nov 24 04:17:39 embed-certs-520529 kubelet[793]: I1124 04:17:39.256350     793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 04:17:44 embed-certs-520529 kubelet[793]: I1124 04:17:44.086918     793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ddq4w" podStartSLOduration=6.204109116 podStartE2EDuration="11.086896359s" podCreationTimestamp="2025-11-24 04:17:33 +0000 UTC" firstStartedPulling="2025-11-24 04:17:33.783296652 +0000 UTC m=+9.817445829" lastFinishedPulling="2025-11-24 04:17:38.666083895 +0000 UTC m=+14.700233072" observedRunningTime="2025-11-24 04:17:39.331515314 +0000 UTC m=+15.365664558" watchObservedRunningTime="2025-11-24 04:17:44.086896359 +0000 UTC m=+20.121045545"
	Nov 24 04:17:45 embed-certs-520529 kubelet[793]: I1124 04:17:45.339920     793 scope.go:117] "RemoveContainer" containerID="3d685f5f9bf15d8ae778347ddcd7240abc2e58579d9a2875b4764f0f9aef5ac3"
	Nov 24 04:17:46 embed-certs-520529 kubelet[793]: I1124 04:17:46.347721     793 scope.go:117] "RemoveContainer" containerID="3d685f5f9bf15d8ae778347ddcd7240abc2e58579d9a2875b4764f0f9aef5ac3"
	Nov 24 04:17:46 embed-certs-520529 kubelet[793]: I1124 04:17:46.348537     793 scope.go:117] "RemoveContainer" containerID="1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac"
	Nov 24 04:17:46 embed-certs-520529 kubelet[793]: E1124 04:17:46.348878     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rckpj_kubernetes-dashboard(e44cda19-b8ea-4f37-8228-6beb1d6474b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj" podUID="e44cda19-b8ea-4f37-8228-6beb1d6474b5"
	Nov 24 04:17:47 embed-certs-520529 kubelet[793]: I1124 04:17:47.356912     793 scope.go:117] "RemoveContainer" containerID="1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac"
	Nov 24 04:17:47 embed-certs-520529 kubelet[793]: E1124 04:17:47.357066     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rckpj_kubernetes-dashboard(e44cda19-b8ea-4f37-8228-6beb1d6474b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj" podUID="e44cda19-b8ea-4f37-8228-6beb1d6474b5"
	Nov 24 04:17:54 embed-certs-520529 kubelet[793]: I1124 04:17:54.667506     793 scope.go:117] "RemoveContainer" containerID="1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac"
	Nov 24 04:17:54 embed-certs-520529 kubelet[793]: E1124 04:17:54.668222     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rckpj_kubernetes-dashboard(e44cda19-b8ea-4f37-8228-6beb1d6474b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj" podUID="e44cda19-b8ea-4f37-8228-6beb1d6474b5"
	Nov 24 04:18:01 embed-certs-520529 kubelet[793]: I1124 04:18:01.395030     793 scope.go:117] "RemoveContainer" containerID="3201ebcdcd96c85d6ccc8935814a307b94f8cb6caa93667464e51dc85132e068"
	Nov 24 04:18:07 embed-certs-520529 kubelet[793]: I1124 04:18:07.229235     793 scope.go:117] "RemoveContainer" containerID="1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac"
	Nov 24 04:18:07 embed-certs-520529 kubelet[793]: I1124 04:18:07.414052     793 scope.go:117] "RemoveContainer" containerID="1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac"
	Nov 24 04:18:07 embed-certs-520529 kubelet[793]: I1124 04:18:07.414566     793 scope.go:117] "RemoveContainer" containerID="13b95fe1883b52b2af09a03014debb9c88264e08051cf4f73c66109c0d914123"
	Nov 24 04:18:07 embed-certs-520529 kubelet[793]: E1124 04:18:07.414837     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rckpj_kubernetes-dashboard(e44cda19-b8ea-4f37-8228-6beb1d6474b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj" podUID="e44cda19-b8ea-4f37-8228-6beb1d6474b5"
	Nov 24 04:18:14 embed-certs-520529 kubelet[793]: I1124 04:18:14.667859     793 scope.go:117] "RemoveContainer" containerID="13b95fe1883b52b2af09a03014debb9c88264e08051cf4f73c66109c0d914123"
	Nov 24 04:18:14 embed-certs-520529 kubelet[793]: E1124 04:18:14.668482     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rckpj_kubernetes-dashboard(e44cda19-b8ea-4f37-8228-6beb1d6474b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj" podUID="e44cda19-b8ea-4f37-8228-6beb1d6474b5"
	Nov 24 04:18:23 embed-certs-520529 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 04:18:23 embed-certs-520529 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 04:18:23 embed-certs-520529 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [997f8bac617eec8cefe694cb39fb8f8ea3728aa8ff4e30ca40e239b9ab5d2a8a] <==
	2025/11/24 04:17:38 Using namespace: kubernetes-dashboard
	2025/11/24 04:17:38 Using in-cluster config to connect to apiserver
	2025/11/24 04:17:38 Using secret token for csrf signing
	2025/11/24 04:17:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 04:17:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 04:17:38 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 04:17:38 Generating JWE encryption key
	2025/11/24 04:17:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 04:17:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 04:17:39 Initializing JWE encryption key from synchronized object
	2025/11/24 04:17:39 Creating in-cluster Sidecar client
	2025/11/24 04:17:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 04:17:39 Serving insecurely on HTTP port: 9090
	2025/11/24 04:18:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 04:17:38 Starting overwatch
	
	
	==> storage-provisioner [3201ebcdcd96c85d6ccc8935814a307b94f8cb6caa93667464e51dc85132e068] <==
	I1124 04:17:31.129800       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 04:18:01.173049       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [32ff3b0eef3a48557f3abf7a60a0b3a38e475c5ff365fe65364698c01cc51e5c] <==
	I1124 04:18:01.511246       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 04:18:01.530824       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 04:18:01.531018       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 04:18:01.536350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:04.991805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:09.253059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:12.852617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:15.906582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:18.928825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:18.934406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:18:18.934935       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 04:18:18.935157       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-520529_1f9f25e8-bdc7-4235-8eae-352974b3dc75!
	I1124 04:18:18.945988       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ee581ba2-d5b1-413b-ba36-b573eee08872", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-520529_1f9f25e8-bdc7-4235-8eae-352974b3dc75 became leader
	W1124 04:18:18.949282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:18.961577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:18:19.038535       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-520529_1f9f25e8-bdc7-4235-8eae-352974b3dc75!
	W1124 04:18:20.964988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:20.971479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:22.975535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:22.986975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:24.990019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:25.005509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:27.014059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:27.019879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-520529 -n embed-certs-520529
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-520529 -n embed-certs-520529: exit status 2 (564.230842ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-520529 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1124 04:18:28.171382  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-520529
helpers_test.go:243: (dbg) docker inspect embed-certs-520529:

-- stdout --
	[
	    {
	        "Id": "8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb",
	        "Created": "2025-11-24T04:15:31.362300869Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 487412,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:17:17.10431829Z",
	            "FinishedAt": "2025-11-24T04:17:16.231804085Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/hosts",
	        "LogPath": "/var/lib/docker/containers/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb-json.log",
	        "Name": "/embed-certs-520529",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-520529:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-520529",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb",
	                "LowerDir": "/var/lib/docker/overlay2/802b4ddd893465d41da7d4aef59a4908de4bca3ef59f3154a91d2e1417b23762-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/802b4ddd893465d41da7d4aef59a4908de4bca3ef59f3154a91d2e1417b23762/merged",
	                "UpperDir": "/var/lib/docker/overlay2/802b4ddd893465d41da7d4aef59a4908de4bca3ef59f3154a91d2e1417b23762/diff",
	                "WorkDir": "/var/lib/docker/overlay2/802b4ddd893465d41da7d4aef59a4908de4bca3ef59f3154a91d2e1417b23762/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-520529",
	                "Source": "/var/lib/docker/volumes/embed-certs-520529/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-520529",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-520529",
	                "name.minikube.sigs.k8s.io": "embed-certs-520529",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43f5aa6dc6abb6c5afecb26151274da85a1c075060b8315f72c3ddfb672143f2",
	            "SandboxKey": "/var/run/docker/netns/43f5aa6dc6ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-520529": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:25:e8:1f:6d:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e3e6fa2232739e2881841760b0f4ae6184afdbd9df8a88d4c082b05eeb608469",
	                    "EndpointID": "f1c5299e4182ffb676d65d68ebc9a3818eaad9fe664b76eb4391960344c4f6c3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-520529",
	                        "8a3eb121088a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-520529 -n embed-certs-520529
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-520529 -n embed-certs-520529: exit status 2 (477.424021ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-520529 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-520529 logs -n 25: (1.407554708s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762702       │ jenkins │ v1.37.0 │ 24 Nov 25 04:13 UTC │ 24 Nov 25 04:14 UTC │
	│ image   │ old-k8s-version-762702 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762702       │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ pause   │ -p old-k8s-version-762702 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762702       │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │                     │
	│ delete  │ -p old-k8s-version-762702                                                                                                                                                                                                                     │ old-k8s-version-762702       │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ delete  │ -p old-k8s-version-762702                                                                                                                                                                                                                     │ old-k8s-version-762702       │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p cert-expiration-918798 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-918798       │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:15 UTC │
	│ delete  │ -p cert-expiration-918798                                                                                                                                                                                                                     │ cert-expiration-918798       │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:15 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-600301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │                     │
	│ stop    │ -p no-preload-600301 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable dashboard -p no-preload-600301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-520529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ stop    │ -p embed-certs-520529 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-520529 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:18 UTC │
	│ image   │ no-preload-600301 image list --format=json                                                                                                                                                                                                    │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ pause   │ -p no-preload-600301 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p disable-driver-mounts-995056                                                                                                                                                                                                               │ disable-driver-mounts-995056 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ image   │ embed-certs-520529 image list --format=json                                                                                                                                                                                                   │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ pause   │ -p embed-certs-520529 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:17:50
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
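The header above documents the standard klog line format used throughout this dump: a severity letter (I/W/E/F), month and day, a microsecond timestamp, the writing thread id, the source file and line, then the message. A minimal filter for pulling only warnings and errors out of such a file (the name last_start.log is a placeholder; note the lines in this report are also tab-indented):

	# Keep only warning/error/fatal klog lines.
	grep -E '^\s*[WEF][0-9]{4} ' last_start.log
	# Same, but drop the thread-id column for easier scanning.
	grep -E '^\s*[WEF][0-9]{4} ' last_start.log | awk '{$3=""; print}'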
	I1124 04:17:50.694959  490948 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:17:50.695087  490948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:17:50.695098  490948 out.go:374] Setting ErrFile to fd 2...
	I1124 04:17:50.695103  490948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:17:50.695357  490948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:17:50.695804  490948 out.go:368] Setting JSON to false
	I1124 04:17:50.696819  490948 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10800,"bootTime":1763947071,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:17:50.696896  490948 start.go:143] virtualization:  
	I1124 04:17:50.700880  490948 out.go:179] * [default-k8s-diff-port-303179] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:17:50.705074  490948 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:17:50.705146  490948 notify.go:221] Checking for updates...
	I1124 04:17:50.711561  490948 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:17:50.714778  490948 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:17:50.717845  490948 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:17:50.720989  490948 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:17:50.723973  490948 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:17:50.727454  490948 config.go:182] Loaded profile config "embed-certs-520529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:17:50.727560  490948 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:17:50.765188  490948 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:17:50.765329  490948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:17:50.823770  490948 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:17:50.814088299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:17:50.823874  490948 docker.go:319] overlay module found
	I1124 04:17:50.827196  490948 out.go:179] * Using the docker driver based on user configuration
	I1124 04:17:50.830131  490948 start.go:309] selected driver: docker
	I1124 04:17:50.830153  490948 start.go:927] validating driver "docker" against <nil>
	I1124 04:17:50.830169  490948 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:17:50.831116  490948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:17:50.887357  490948 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:17:50.878355609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:17:50.887535  490948 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 04:17:50.887758  490948 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:17:50.890749  490948 out.go:179] * Using Docker driver with root privileges
	I1124 04:17:50.893658  490948 cni.go:84] Creating CNI manager for ""
	I1124 04:17:50.893729  490948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:17:50.893750  490948 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 04:17:50.893837  490948 start.go:353] cluster config:
	{Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:17:50.897053  490948 out.go:179] * Starting "default-k8s-diff-port-303179" primary control-plane node in "default-k8s-diff-port-303179" cluster
	I1124 04:17:50.899848  490948 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:17:50.902774  490948 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:17:50.905748  490948 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:17:50.905799  490948 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 04:17:50.905827  490948 cache.go:65] Caching tarball of preloaded images
	I1124 04:17:50.905833  490948 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:17:50.905912  490948 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:17:50.905922  490948 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 04:17:50.906023  490948 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/config.json ...
	I1124 04:17:50.906043  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/config.json: {Name:mke899bf3df2fc5c9ba13e5b10e48424e42ba10f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:17:50.926102  490948 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:17:50.926126  490948 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:17:50.926147  490948 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:17:50.926177  490948 start.go:360] acquireMachinesLock for default-k8s-diff-port-303179: {Name:mk876fcea2f12d71199d194b5970210275c2b905 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:17:50.926283  490948 start.go:364] duration metric: took 84.563µs to acquireMachinesLock for "default-k8s-diff-port-303179"
	I1124 04:17:50.926314  490948 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:17:50.926386  490948 start.go:125] createHost starting for "" (driver="docker")
	W1124 04:17:47.156470  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:17:49.157131  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:17:51.157796  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	I1124 04:17:50.929768  490948 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 04:17:50.930010  490948 start.go:159] libmachine.API.Create for "default-k8s-diff-port-303179" (driver="docker")
	I1124 04:17:50.930046  490948 client.go:173] LocalClient.Create starting
	I1124 04:17:50.930160  490948 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem
	I1124 04:17:50.930204  490948 main.go:143] libmachine: Decoding PEM data...
	I1124 04:17:50.930221  490948 main.go:143] libmachine: Parsing certificate...
	I1124 04:17:50.930277  490948 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem
	I1124 04:17:50.930300  490948 main.go:143] libmachine: Decoding PEM data...
	I1124 04:17:50.930312  490948 main.go:143] libmachine: Parsing certificate...
	I1124 04:17:50.930721  490948 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 04:17:50.951561  490948 cli_runner.go:211] docker network inspect default-k8s-diff-port-303179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 04:17:50.951650  490948 network_create.go:284] running [docker network inspect default-k8s-diff-port-303179] to gather additional debugging logs...
	I1124 04:17:50.951673  490948 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303179
	W1124 04:17:50.968687  490948 cli_runner.go:211] docker network inspect default-k8s-diff-port-303179 returned with exit code 1
	I1124 04:17:50.968778  490948 network_create.go:287] error running [docker network inspect default-k8s-diff-port-303179]: docker network inspect default-k8s-diff-port-303179: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-303179 not found
	I1124 04:17:50.968797  490948 network_create.go:289] output of [docker network inspect default-k8s-diff-port-303179]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-303179 not found
	
	** /stderr **
	I1124 04:17:50.968916  490948 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:17:50.987083  490948 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-740fb099fccc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:7a:9c:b0:4d:41} reservation:<nil>}
	I1124 04:17:50.987483  490948 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b0f25a7c590 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:53:b3:a1:55:1a} reservation:<nil>}
	I1124 04:17:50.987741  490948 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c1d995330d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:83:d9:0c:83:10} reservation:<nil>}
	I1124 04:17:50.988046  490948 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e3e6fa223273 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:be:59:ed:b0:cb:f8} reservation:<nil>}
	I1124 04:17:50.988491  490948 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a54a80}
	I1124 04:17:50.988517  490948 network_create.go:124] attempt to create docker network default-k8s-diff-port-303179 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 04:17:50.988668  490948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-303179 default-k8s-diff-port-303179
	I1124 04:17:51.072462  490948 network_create.go:108] docker network default-k8s-diff-port-303179 192.168.85.0/24 created
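The subnet probe above inspects each existing docker bridge, skips the 192.168.X.0/24 ranges already taken (49, 58, 67, 76), and creates the cluster network on the first free candidate. A minimal sketch of the same idea using only the docker CLI; the network name my-cluster-net is a placeholder, and minikube's real implementation also consults host interfaces and its own reservation table:

	# Collect subnets already claimed by docker networks, then create a bridge
	# on the first free 192.168.X.0/24 candidate (minikube steps X by nine).
	taken=$(docker network ls -q | xargs docker network inspect \
	          --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
	for x in 49 58 67 76 85 94; do
	  case "$taken" in
	    *"192.168.$x.0/24"*) continue ;;   # subnet in use, try the next one
	  esac
	  docker network create --driver=bridge \
	    --subnet=192.168.$x.0/24 --gateway=192.168.$x.1 \
	    -o com.docker.network.driver.mtu=1500 my-cluster-net
	  break
	done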
	I1124 04:17:51.072497  490948 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-303179" container
	I1124 04:17:51.072588  490948 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 04:17:51.089728  490948 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-303179 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-303179 --label created_by.minikube.sigs.k8s.io=true
	I1124 04:17:51.114616  490948 oci.go:103] Successfully created a docker volume default-k8s-diff-port-303179
	I1124 04:17:51.114714  490948 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-303179-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-303179 --entrypoint /usr/bin/test -v default-k8s-diff-port-303179:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 04:17:51.679383  490948 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-303179
	I1124 04:17:51.679472  490948 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:17:51.679489  490948 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 04:17:51.679560  490948 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-303179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	W1124 04:17:53.656741  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:17:56.156679  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	I1124 04:17:56.123329  490948 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-303179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (4.443721196s)
	I1124 04:17:56.123365  490948 kic.go:203] duration metric: took 4.44387154s to extract preloaded images to volume ...
	W1124 04:17:56.123517  490948 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 04:17:56.123635  490948 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 04:17:56.184270  490948 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-303179 --name default-k8s-diff-port-303179 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-303179 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-303179 --network default-k8s-diff-port-303179 --ip 192.168.85.2 --volume default-k8s-diff-port-303179:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 04:17:56.497586  490948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Running}}
	I1124 04:17:56.519383  490948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:17:56.545782  490948 cli_runner.go:164] Run: docker exec default-k8s-diff-port-303179 stat /var/lib/dpkg/alternatives/iptables
	I1124 04:17:56.602646  490948 oci.go:144] the created container "default-k8s-diff-port-303179" has a running status.
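The docker run above publishes five container ports (22, 2376, 5000, 8444 and 32443) on loopback with ephemeral host ports. Which host port landed where can be read back with docker port; the SSH mapping (33451 in this run, as the libmachine lines further down show) is the one the provisioner dials:

	# All host port bindings for the node container.
	docker port default-k8s-diff-port-303179
	# Just the API server mapping (this profile runs it on 8444).
	docker port default-k8s-diff-port-303179 8444/tcp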
	I1124 04:17:56.602676  490948 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa...
	I1124 04:17:56.838776  490948 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 04:17:56.874113  490948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:17:56.898295  490948 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 04:17:56.898315  490948 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-303179 chown docker:docker /home/docker/.ssh/authorized_keys]
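With the public key installed into /home/docker/.ssh/authorized_keys, the node accepts the same connection manually that libmachine makes below; a sketch using the key path and host port from this run (minikube ssh -p default-k8s-diff-port-303179 is the supported shortcut):

	# Manual SSH into the kic node; host port 33451 maps to the container's 22.
	ssh -i /home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -p 33451 docker@127.0.0.1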
	I1124 04:17:56.950176  490948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:17:56.984819  490948 machine.go:94] provisionDockerMachine start ...
	I1124 04:17:56.984912  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:17:57.023465  490948 main.go:143] libmachine: Using SSH client type: native
	I1124 04:17:57.023819  490948 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1124 04:17:57.023834  490948 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:17:57.024509  490948 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 04:18:00.396368  490948 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-303179
	
	I1124 04:18:00.396396  490948 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-303179"
	I1124 04:18:00.396478  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:00.438129  490948 main.go:143] libmachine: Using SSH client type: native
	I1124 04:18:00.438512  490948 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1124 04:18:00.438528  490948 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-303179 && echo "default-k8s-diff-port-303179" | sudo tee /etc/hostname
	I1124 04:18:00.600406  490948 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-303179
	
	I1124 04:18:00.600527  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:00.619075  490948 main.go:143] libmachine: Using SSH client type: native
	I1124 04:18:00.619398  490948 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1124 04:18:00.619420  490948 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-303179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-303179/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-303179' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1124 04:17:58.656163  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:18:00.659667  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	I1124 04:18:00.766592  490948 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 04:18:00.766620  490948 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:18:00.766649  490948 ubuntu.go:190] setting up certificates
	I1124 04:18:00.766660  490948 provision.go:84] configureAuth start
	I1124 04:18:00.766721  490948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303179
	I1124 04:18:00.784043  490948 provision.go:143] copyHostCerts
	I1124 04:18:00.784115  490948 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:18:00.784129  490948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:18:00.784209  490948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:18:00.784312  490948 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:18:00.784323  490948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:18:00.784351  490948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:18:00.784408  490948 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:18:00.784419  490948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:18:00.784444  490948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:18:00.784490  490948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-303179 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-303179 localhost minikube]
	I1124 04:18:01.212935  490948 provision.go:177] copyRemoteCerts
	I1124 04:18:01.213014  490948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:18:01.213055  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:01.233444  490948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:18:01.339901  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:18:01.358998  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 04:18:01.378332  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 04:18:01.405132  490948 provision.go:87] duration metric: took 638.445601ms to configureAuth
	I1124 04:18:01.405177  490948 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:18:01.405434  490948 config.go:182] Loaded profile config "default-k8s-diff-port-303179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:18:01.405611  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:01.434745  490948 main.go:143] libmachine: Using SSH client type: native
	I1124 04:18:01.435192  490948 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1124 04:18:01.435221  490948 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:18:01.853859  490948 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 04:18:01.853922  490948 machine.go:97] duration metric: took 4.869076984s to provisionDockerMachine
	I1124 04:18:01.853947  490948 client.go:176] duration metric: took 10.923888086s to LocalClient.Create
	I1124 04:18:01.853977  490948 start.go:167] duration metric: took 10.923967694s to libmachine.API.Create "default-k8s-diff-port-303179"
	I1124 04:18:01.853985  490948 start.go:293] postStartSetup for "default-k8s-diff-port-303179" (driver="docker")
	I1124 04:18:01.853995  490948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:18:01.854063  490948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:18:01.854109  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:01.873098  490948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:18:01.978896  490948 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:18:01.982403  490948 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:18:01.982434  490948 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:18:01.982446  490948 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:18:01.982527  490948 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:18:01.982638  490948 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:18:01.982743  490948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:18:01.991409  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:18:02.040594  490948 start.go:296] duration metric: took 186.59428ms for postStartSetup
	I1124 04:18:02.041048  490948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303179
	I1124 04:18:02.064970  490948 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/config.json ...
	I1124 04:18:02.065285  490948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:18:02.065327  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:02.084658  490948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:18:02.191557  490948 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:18:02.196267  490948 start.go:128] duration metric: took 11.26986634s to createHost
	I1124 04:18:02.196292  490948 start.go:83] releasing machines lock for "default-k8s-diff-port-303179", held for 11.269995942s
	I1124 04:18:02.196410  490948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303179
	I1124 04:18:02.213474  490948 ssh_runner.go:195] Run: cat /version.json
	I1124 04:18:02.213532  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:02.213546  490948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:18:02.213601  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:02.239773  490948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:18:02.256553  490948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:18:02.441276  490948 ssh_runner.go:195] Run: systemctl --version
	I1124 04:18:02.448781  490948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:18:02.492615  490948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:18:02.497062  490948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:18:02.497194  490948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:18:02.527097  490948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 04:18:02.527118  490948 start.go:496] detecting cgroup driver to use...
	I1124 04:18:02.527151  490948 detect.go:187] detected "cgroupfs" cgroup driver on host os
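The "cgroupfs" detection above decides which cgroup manager CRI-O and the kubelet get configured with further down. One common way to check what a host offers (a generic probe, not necessarily the exact check minikube performs):

	# cgroup2fs means a unified cgroup v2 hierarchy; tmpfs means legacy v1.
	stat -fc %T /sys/fs/cgroup/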
	I1124 04:18:02.527201  490948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:18:02.545934  490948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:18:02.559870  490948 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:18:02.559983  490948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:18:02.578714  490948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:18:02.601235  490948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:18:02.748994  490948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:18:02.884388  490948 docker.go:234] disabling docker service ...
	I1124 04:18:02.884517  490948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:18:02.906075  490948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:18:02.924129  490948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:18:03.047696  490948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:18:03.174469  490948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:18:03.187843  490948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:18:03.203545  490948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:18:03.203615  490948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.212547  490948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:18:03.212692  490948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.221891  490948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.231279  490948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.240546  490948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:18:03.248760  490948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.257558  490948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.271830  490948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:03.280639  490948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:18:03.288525  490948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:18:03.295952  490948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:18:03.418756  490948 ssh_runner.go:195] Run: sudo systemctl restart crio
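The sed chain above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches cgroup_manager to cgroupfs, moves conmon into the pod cgroup, and opens unprivileged ports via default_sysctls. After the restart, the result can be confirmed on the node, for example:

	# The drop-in should now carry the values the sed edits were meant to set.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio   # prints "active" once the restart succeeded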
	I1124 04:18:03.590568  490948 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:18:03.590660  490948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:18:03.594598  490948 start.go:564] Will wait 60s for crictl version
	I1124 04:18:03.594691  490948 ssh_runner.go:195] Run: which crictl
	I1124 04:18:03.598163  490948 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:18:03.623147  490948 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:18:03.623258  490948 ssh_runner.go:195] Run: crio --version
	I1124 04:18:03.659678  490948 ssh_runner.go:195] Run: crio --version
	I1124 04:18:03.700550  490948 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:18:03.703438  490948 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:18:03.720282  490948 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 04:18:03.724485  490948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:18:03.734430  490948 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:18:03.734629  490948 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:18:03.734694  490948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:18:03.773107  490948 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:18:03.773134  490948 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:18:03.773187  490948 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:18:03.799519  490948 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:18:03.799544  490948 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:18:03.799552  490948 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1124 04:18:03.799642  490948 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-303179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
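The ExecStart override above is written as a systemd drop-in (the scp lines below place it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service). Once both files are in place, the merged unit can be inspected on the node:

	# Base unit plus every drop-in, including the ExecStart override above.
	systemctl cat kubelet
	# The final command line systemd will actually run.
	systemctl show kubelet -p ExecStart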
	I1124 04:18:03.799722  490948 ssh_runner.go:195] Run: crio config
	I1124 04:18:03.864437  490948 cni.go:84] Creating CNI manager for ""
	I1124 04:18:03.864463  490948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:18:03.864480  490948 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:18:03.864602  490948 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-303179 NodeName:default-k8s-diff-port-303179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:18:03.864777  490948 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-303179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
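The generated file above bundles four kubeadm documents: InitConfiguration (node registration and bind port 8444), ClusterConfiguration (control-plane endpoint and component flags), KubeletConfiguration, and KubeProxyConfiguration. It is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; a config like this can be sanity-checked on the node without touching any cluster state:

	# Render everything kubeadm would do with this config, but apply none of it.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run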
	I1124 04:18:03.864855  490948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:18:03.873154  490948 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:18:03.873281  490948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:18:03.881221  490948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 04:18:03.895053  490948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:18:03.908149  490948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1124 04:18:03.920909  490948 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:18:03.924490  490948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:18:03.934755  490948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:18:04.052655  490948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:18:04.077352  490948 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179 for IP: 192.168.85.2
	I1124 04:18:04.077415  490948 certs.go:195] generating shared ca certs ...
	I1124 04:18:04.077448  490948 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.077609  490948 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:18:04.077683  490948 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:18:04.077708  490948 certs.go:257] generating profile certs ...
	I1124 04:18:04.077784  490948 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.key
	I1124 04:18:04.077814  490948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt with IP's: []
	I1124 04:18:04.195277  490948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt ...
	I1124 04:18:04.195309  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: {Name:mk36afb6ae7e610b32c198c0358f47b75f2fb0e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.195509  490948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.key ...
	I1124 04:18:04.195526  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.key: {Name:mkcd9c70b7eacc96e3029affa66d338ba32ec593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.195625  490948 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key.0cae04f4
	I1124 04:18:04.195643  490948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt.0cae04f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 04:18:04.594712  490948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt.0cae04f4 ...
	I1124 04:18:04.594745  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt.0cae04f4: {Name:mkfe7b719c01bb3e8edd08cd2a3055a6edfa7b3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.594946  490948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key.0cae04f4 ...
	I1124 04:18:04.594962  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key.0cae04f4: {Name:mk54f2bd8adc5d6cce414d204881c5560ea31ef3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.595049  490948 certs.go:382] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt.0cae04f4 -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt
	I1124 04:18:04.595139  490948 certs.go:386] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key.0cae04f4 -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key
	I1124 04:18:04.595204  490948 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.key
	I1124 04:18:04.595222  490948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.crt with IP's: []
	I1124 04:18:04.718777  490948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.crt ...
	I1124 04:18:04.718815  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.crt: {Name:mk4e9537ed1729474e7e143c6f815468d69786cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.718987  490948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.key ...
	I1124 04:18:04.719001  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.key: {Name:mk010733f937fce97398c8edc10774eb9ccf13bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:04.719202  490948 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:18:04.719250  490948 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:18:04.719264  490948 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:18:04.719292  490948 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:18:04.719322  490948 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:18:04.719350  490948 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:18:04.719402  490948 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:18:04.720068  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:18:04.739300  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:18:04.757875  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:18:04.776252  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:18:04.794316  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 04:18:04.812782  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 04:18:04.830576  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:18:04.848757  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 04:18:04.867213  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:18:04.890200  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:18:04.919345  490948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:18:04.941488  490948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:18:04.955441  490948 ssh_runner.go:195] Run: openssl version
	I1124 04:18:04.962367  490948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:18:04.970643  490948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:18:04.974359  490948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:18:04.974509  490948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:18:05.017311  490948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:18:05.026443  490948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:18:05.035426  490948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:18:05.039491  490948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:18:05.039615  490948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:18:05.087018  490948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
	I1124 04:18:05.096162  490948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:18:05.105567  490948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:18:05.109969  490948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:18:05.110066  490948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:18:05.152177  490948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
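	The openssl x509 -hash calls and the <hash>.0 symlinks above follow OpenSSL's CA directory convention: verifiers locate a CA in /etc/ssl/certs by the hash of its subject name, so each PEM needs a companion symlink named <subject-hash>.0 (b5213941.0 for minikubeCA here). A sketch of the same step done manually for one certificate, with paths as in this run:

	# compute the subject-name hash OpenSSL uses for CA directory lookups
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# expose the CA under <hash>.0 so chain verification can find it
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"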
	I1124 04:18:05.162100  490948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:18:05.165887  490948 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 04:18:05.165943  490948 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:18:05.166017  490948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:18:05.166074  490948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:18:05.194311  490948 cri.go:89] found id: ""
	I1124 04:18:05.194387  490948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:18:05.202508  490948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 04:18:05.211081  490948 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 04:18:05.211243  490948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 04:18:05.219461  490948 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 04:18:05.219490  490948 kubeadm.go:158] found existing configuration files:
	
	I1124 04:18:05.219551  490948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1124 04:18:05.227300  490948 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 04:18:05.227366  490948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 04:18:05.235282  490948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1124 04:18:05.243173  490948 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 04:18:05.243252  490948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 04:18:05.250527  490948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1124 04:18:05.258224  490948 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 04:18:05.258337  490948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 04:18:05.265796  490948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1124 04:18:05.273481  490948 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 04:18:05.273559  490948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 04:18:05.281228  490948 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 04:18:05.321199  490948 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 04:18:05.321480  490948 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 04:18:05.345618  490948 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 04:18:05.345734  490948 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 04:18:05.345801  490948 kubeadm.go:319] OS: Linux
	I1124 04:18:05.345906  490948 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 04:18:05.345991  490948 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 04:18:05.346059  490948 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 04:18:05.346113  490948 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 04:18:05.346179  490948 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 04:18:05.346261  490948 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 04:18:05.346324  490948 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 04:18:05.346374  490948 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 04:18:05.346420  490948 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 04:18:05.424123  490948 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 04:18:05.424323  490948 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 04:18:05.424460  490948 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 04:18:05.434871  490948 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 04:18:05.441714  490948 out.go:252]   - Generating certificates and keys ...
	I1124 04:18:05.441861  490948 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 04:18:05.441947  490948 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	W1124 04:18:03.157589  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	W1124 04:18:05.157675  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	I1124 04:18:05.713614  490948 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 04:18:06.976046  490948 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 04:18:07.336487  490948 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 04:18:08.257018  490948 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 04:18:08.373979  490948 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 04:18:08.374366  490948 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-303179 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 04:18:08.696003  490948 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 04:18:08.696335  490948 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-303179 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 04:18:10.079153  490948 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 04:18:10.598352  490948 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	W1124 04:18:07.657455  487285 pod_ready.go:104] pod "coredns-66bc5c9577-bvwhr" is not "Ready", error: <nil>
	I1124 04:18:09.657711  487285 pod_ready.go:94] pod "coredns-66bc5c9577-bvwhr" is "Ready"
	I1124 04:18:09.657736  487285 pod_ready.go:86] duration metric: took 38.007659988s for pod "coredns-66bc5c9577-bvwhr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:09.661332  487285 pod_ready.go:83] waiting for pod "etcd-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:09.667255  487285 pod_ready.go:94] pod "etcd-embed-certs-520529" is "Ready"
	I1124 04:18:09.667282  487285 pod_ready.go:86] duration metric: took 5.922887ms for pod "etcd-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:09.670599  487285 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:09.676234  487285 pod_ready.go:94] pod "kube-apiserver-embed-certs-520529" is "Ready"
	I1124 04:18:09.676259  487285 pod_ready.go:86] duration metric: took 5.637731ms for pod "kube-apiserver-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:09.679245  487285 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:09.855508  487285 pod_ready.go:94] pod "kube-controller-manager-embed-certs-520529" is "Ready"
	I1124 04:18:09.855601  487285 pod_ready.go:86] duration metric: took 176.277847ms for pod "kube-controller-manager-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:10.055357  487285 pod_ready.go:83] waiting for pod "kube-proxy-dt4th" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:10.459884  487285 pod_ready.go:94] pod "kube-proxy-dt4th" is "Ready"
	I1124 04:18:10.459906  487285 pod_ready.go:86] duration metric: took 404.472331ms for pod "kube-proxy-dt4th" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:10.654861  487285 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:11.056775  487285 pod_ready.go:94] pod "kube-scheduler-embed-certs-520529" is "Ready"
	I1124 04:18:11.056805  487285 pod_ready.go:86] duration metric: took 401.921252ms for pod "kube-scheduler-embed-certs-520529" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:18:11.056820  487285 pod_ready.go:40] duration metric: took 39.410703509s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:18:11.147150  487285 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 04:18:11.150432  487285 out.go:179] * Done! kubectl is now configured to use "embed-certs-520529" cluster and "default" namespace by default
	I1124 04:18:11.119798  490948 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 04:18:11.120614  490948 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 04:18:11.457142  490948 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 04:18:11.987946  490948 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 04:18:12.203712  490948 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 04:18:12.363411  490948 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 04:18:12.722871  490948 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 04:18:12.723496  490948 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 04:18:12.726526  490948 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 04:18:12.730302  490948 out.go:252]   - Booting up control plane ...
	I1124 04:18:12.730429  490948 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 04:18:12.730576  490948 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 04:18:12.732554  490948 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 04:18:12.748631  490948 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 04:18:12.748975  490948 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 04:18:12.757548  490948 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 04:18:12.757926  490948 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 04:18:12.758139  490948 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 04:18:12.900630  490948 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 04:18:12.900756  490948 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 04:18:13.902906  490948 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002325882s
	I1124 04:18:13.907840  490948 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 04:18:13.907937  490948 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1124 04:18:13.908225  490948 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 04:18:13.908320  490948 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 04:18:18.510187  490948 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.601774026s
	I1124 04:18:19.054973  490948 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.147039664s
	I1124 04:18:20.910765  490948 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002816757s
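	kubeadm's control-plane-check above simply polls each component's health endpoint until it reports ok. The same probes can be issued by hand with the addresses from this run (-k because the apiserver serves the cluster certificate, which the host shell does not trust):

	curl http://127.0.0.1:10248/healthz       # kubelet (plain HTTP)
	curl -k https://192.168.85.2:8444/livez   # kube-apiserver on the custom port
	curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez     # kube-scheduler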
	I1124 04:18:20.934561  490948 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 04:18:20.968729  490948 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 04:18:20.987520  490948 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 04:18:20.987736  490948 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-303179 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 04:18:21.006778  490948 kubeadm.go:319] [bootstrap-token] Using token: 3da3my.so862l6ukbwktov0
	I1124 04:18:21.009580  490948 out.go:252]   - Configuring RBAC rules ...
	I1124 04:18:21.009710  490948 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 04:18:21.015114  490948 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 04:18:21.026239  490948 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 04:18:21.033780  490948 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 04:18:21.038963  490948 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 04:18:21.043709  490948 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 04:18:21.319249  490948 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 04:18:21.760875  490948 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 04:18:22.319509  490948 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 04:18:22.320685  490948 kubeadm.go:319] 
	I1124 04:18:22.320766  490948 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 04:18:22.320779  490948 kubeadm.go:319] 
	I1124 04:18:22.320864  490948 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 04:18:22.320875  490948 kubeadm.go:319] 
	I1124 04:18:22.320900  490948 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 04:18:22.320964  490948 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 04:18:22.321025  490948 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 04:18:22.321039  490948 kubeadm.go:319] 
	I1124 04:18:22.321093  490948 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 04:18:22.321102  490948 kubeadm.go:319] 
	I1124 04:18:22.321158  490948 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 04:18:22.321164  490948 kubeadm.go:319] 
	I1124 04:18:22.321215  490948 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 04:18:22.321296  490948 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 04:18:22.321369  490948 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 04:18:22.321377  490948 kubeadm.go:319] 
	I1124 04:18:22.321461  490948 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 04:18:22.321543  490948 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 04:18:22.321552  490948 kubeadm.go:319] 
	I1124 04:18:22.321642  490948 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 3da3my.so862l6ukbwktov0 \
	I1124 04:18:22.321752  490948 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 \
	I1124 04:18:22.321779  490948 kubeadm.go:319] 	--control-plane 
	I1124 04:18:22.321784  490948 kubeadm.go:319] 
	I1124 04:18:22.321868  490948 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 04:18:22.321875  490948 kubeadm.go:319] 
	I1124 04:18:22.321978  490948 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 3da3my.so862l6ukbwktov0 \
	I1124 04:18:22.322085  490948 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 
	I1124 04:18:22.326923  490948 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 04:18:22.327143  490948 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 04:18:22.327248  490948 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
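	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key, which lets joining nodes pin the CA they bootstrap against. If the printed value is lost, it can be recomputed on the control plane; a sketch using the standard kubeadm recipe, with the CA at the certificatesDir this profile uses:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'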
	I1124 04:18:22.327268  490948 cni.go:84] Creating CNI manager for ""
	I1124 04:18:22.327275  490948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:18:22.330432  490948 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 04:18:22.333486  490948 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 04:18:22.337894  490948 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 04:18:22.337915  490948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 04:18:22.355430  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 04:18:22.679211  490948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 04:18:22.679330  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:22.679405  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-303179 minikube.k8s.io/updated_at=2025_11_24T04_18_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=default-k8s-diff-port-303179 minikube.k8s.io/primary=true
	I1124 04:18:23.124999  490948 ops.go:34] apiserver oom_adj: -16
	I1124 04:18:23.125101  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:23.625157  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:24.125761  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:24.625276  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:25.125242  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:25.625157  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:26.126009  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:26.625229  490948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:18:26.806445  490948 kubeadm.go:1114] duration metric: took 4.127159928s to wait for elevateKubeSystemPrivileges
	I1124 04:18:26.806493  490948 kubeadm.go:403] duration metric: took 21.640554265s to StartCluster
	I1124 04:18:26.806510  490948 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:26.806600  490948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:18:26.808160  490948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:26.808393  490948 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:18:26.808647  490948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 04:18:26.809021  490948 config.go:182] Loaded profile config "default-k8s-diff-port-303179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:18:26.809069  490948 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:18:26.809155  490948 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-303179"
	I1124 04:18:26.809172  490948 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-303179"
	I1124 04:18:26.809201  490948 host.go:66] Checking if "default-k8s-diff-port-303179" exists ...
	I1124 04:18:26.809460  490948 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-303179"
	I1124 04:18:26.809481  490948 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-303179"
	I1124 04:18:26.809786  490948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:18:26.809788  490948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:18:26.813756  490948 out.go:179] * Verifying Kubernetes components...
	I1124 04:18:26.819098  490948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:18:26.842427  490948 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-303179"
	I1124 04:18:26.842500  490948 host.go:66] Checking if "default-k8s-diff-port-303179" exists ...
	I1124 04:18:26.842931  490948 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:18:26.863062  490948 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:18:26.868098  490948 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:18:26.868122  490948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:18:26.868184  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:26.890658  490948 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:18:26.890691  490948 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:18:26.890773  490948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:18:26.906290  490948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:18:26.931674  490948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:18:27.322970  490948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:18:27.346605  490948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 04:18:27.346732  490948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:18:27.361338  490948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:18:28.447462  490948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124457194s)
	I1124 04:18:28.447527  490948 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.100769345s)
	I1124 04:18:28.447586  490948 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.10094831s)
	I1124 04:18:28.447601  490948 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
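	The sed pipeline that just completed rewrites the coredns ConfigMap in place: a hosts block resolving host.minikube.internal to the host gateway is inserted ahead of the forward plugin, and a log directive ahead of errors. Reconstructed from the sed expressions (not captured from the cluster), the relevant Corefile fragment should look roughly like:

	.:53 {
	    log
	    errors
	    # ... default plugins (health, ready, kubernetes, cache, ...) unchanged ...
	    hosts {
	       192.168.85.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	}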
	I1124 04:18:28.448623  490948 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-303179" to be "Ready" ...
	I1124 04:18:28.448947  490948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.087547415s)
	I1124 04:18:28.503043  490948 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
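	With the node registered, the two default addons are applied with the cluster's own kubectl against the staged manifests. A hedged way to confirm the result for this profile, using the binaries from the report:

	out/minikube-linux-arm64 -p default-k8s-diff-port-303179 addons list
	# the provisioner itself runs as a kube-system pod:
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pod storage-provisioner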
	
	
	==> CRI-O <==
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.233523809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.245295377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.246397946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.28729789Z" level=info msg="Created container 13b95fe1883b52b2af09a03014debb9c88264e08051cf4f73c66109c0d914123: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj/dashboard-metrics-scraper" id=6adf30cf-cb04-46a4-a0c9-0b0da694678c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.294614617Z" level=info msg="Starting container: 13b95fe1883b52b2af09a03014debb9c88264e08051cf4f73c66109c0d914123" id=ef00a7ab-532c-4eda-be8f-fe79d9c4c981 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.299925886Z" level=info msg="Started container" PID=1689 containerID=13b95fe1883b52b2af09a03014debb9c88264e08051cf4f73c66109c0d914123 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj/dashboard-metrics-scraper id=ef00a7ab-532c-4eda-be8f-fe79d9c4c981 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee17000730b0978eb3ce03dc51b839d5cc96e0553b54b4361b715e16cfb5d392
	Nov 24 04:18:07 embed-certs-520529 conmon[1687]: conmon 13b95fe1883b52b2af09 <ninfo>: container 1689 exited with status 1
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.417264306Z" level=info msg="Removing container: 1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac" id=981daa3f-3852-4a63-ba52-851b8c2bc8fd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.432024694Z" level=info msg="Error loading conmon cgroup of container 1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac: cgroup deleted" id=981daa3f-3852-4a63-ba52-851b8c2bc8fd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:18:07 embed-certs-520529 crio[663]: time="2025-11-24T04:18:07.438309022Z" level=info msg="Removed container 1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj/dashboard-metrics-scraper" id=981daa3f-3852-4a63-ba52-851b8c2bc8fd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.045675382Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.063468825Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.063653688Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.063754145Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.090907966Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.090942132Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.090962465Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.114619881Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.114654261Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.114670614Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.125120959Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.125157636Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.125179823Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.144888317Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:18:11 embed-certs-520529 crio[663]: time="2025-11-24T04:18:11.144933643Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	13b95fe1883b5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   ee17000730b09       dashboard-metrics-scraper-6ffb444bf9-rckpj   kubernetes-dashboard
	32ff3b0eef3a4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   cc593f28ee306       storage-provisioner                          kube-system
	997f8bac617ee       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   50 seconds ago       Running             kubernetes-dashboard        0                   2836bb8d9da84       kubernetes-dashboard-855c9754f9-ddq4w        kubernetes-dashboard
	13bbdb8cd12e6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   89bfabb6f236b       busybox                                      default
	0558f06299e5c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   943cf1d4ec820       coredns-66bc5c9577-bvwhr                     kube-system
	a71adf18e73dd       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   f00b57316239d       kube-proxy-dt4th                             kube-system
	cb80ca0ac5438       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   d4f7b8158c9a1       kindnet-tkncp                                kube-system
	3201ebcdcd96c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   cc593f28ee306       storage-provisioner                          kube-system
	88db140510be7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   8f148e20780aa       kube-apiserver-embed-certs-520529            kube-system
	46b464c3ef546       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   8ce6db1e7dfca       etcd-embed-certs-520529                      kube-system
	8ecff8f50d392       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   b7fc262b95ac6       kube-controller-manager-embed-certs-520529   kube-system
	dbe92e9527424       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   b31db0f2a32d6       kube-scheduler-embed-certs-520529            kube-system
	
	
	==> coredns [0558f06299e5c2fe843cf590ec463909b96624be39d7369aaf0d96a1bfd563ac] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35852 - 64584 "HINFO IN 7236708362425433878.2655873434030849362. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004835178s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
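	The dial tcp 10.96.0.1:443: i/o timeout errors above are CoreDNS failing to reach the in-cluster apiserver Service, consistent with the restarted node still lacking a CNI config at that point; the kindnet conflist only appears in the CRI-O events at 04:18:11. A quick hedged triage for this class of symptom:

	# does the kubernetes Service have live apiserver endpoints?
	kubectl get svc,endpoints kubernetes -n default
	# did the CNI config land on the node?
	out/minikube-linux-arm64 -p embed-certs-520529 ssh -- ls /etc/cni/net.d/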
	
	
	==> describe nodes <==
	Name:               embed-certs-520529
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-520529
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=embed-certs-520529
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_16_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:15:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-520529
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:18:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:18:00 +0000   Mon, 24 Nov 2025 04:15:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:18:00 +0000   Mon, 24 Nov 2025 04:15:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:18:00 +0000   Mon, 24 Nov 2025 04:15:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:18:00 +0000   Mon, 24 Nov 2025 04:16:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-520529
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                cb05b9d1-526c-48cf-b8c9-27f04aa8373b
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-bvwhr                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 etcd-embed-certs-520529                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m28s
	  kube-system                 kindnet-tkncp                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-embed-certs-520529             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-embed-certs-520529    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-dt4th                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-embed-certs-520529             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rckpj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ddq4w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m21s                  kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node embed-certs-520529 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node embed-certs-520529 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s (x8 over 2m38s)  kubelet          Node embed-certs-520529 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m28s                  kubelet          Node embed-certs-520529 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s                  kubelet          Node embed-certs-520529 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m28s                  kubelet          Node embed-certs-520529 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m24s                  node-controller  Node embed-certs-520529 event: Registered Node embed-certs-520529 in Controller
	  Normal   NodeReady                103s                   kubelet          Node embed-certs-520529 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node embed-certs-520529 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node embed-certs-520529 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node embed-certs-520529 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                    node-controller  Node embed-certs-520529 event: Registered Node embed-certs-520529 in Controller
	
	
	==> dmesg <==
	[Nov24 03:55] overlayfs: idmapped layers are currently not supported
	[Nov24 03:56] overlayfs: idmapped layers are currently not supported
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	[Nov24 04:15] overlayfs: idmapped layers are currently not supported
	[ +47.476343] overlayfs: idmapped layers are currently not supported
	[Nov24 04:16] overlayfs: idmapped layers are currently not supported
	[Nov24 04:17] overlayfs: idmapped layers are currently not supported
	[Nov24 04:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [46b464c3ef546ad426e20a096b6d507622c061f58b20e93dcb5f51f5429e5a56] <==
	{"level":"warn","ts":"2025-11-24T04:17:27.286669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.307008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.332749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.363380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.406041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.427003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.451548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.463133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.480657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.523064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.551458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.575460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.580504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.604862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.619941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.640330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.662603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.712713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.731271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.742880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.758628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.801910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.812013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.832364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:17:27.912873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54176","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 04:18:29 up  3:00,  0 user,  load average: 3.16, 3.33, 2.85
	Linux embed-certs-520529 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cb80ca0ac5438e0cbc64a217d24df56f63a755bd503a1dfd46fc74505c3a9a6a] <==
	I1124 04:17:30.860895       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:17:30.862717       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 04:17:30.862913       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:17:30.862956       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:17:30.863005       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:17:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:17:31.115020       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:17:31.115128       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:17:31.115173       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:17:31.115842       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 04:18:01.116827       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 04:18:01.116948       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 04:18:01.117038       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 04:18:01.117118       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 04:18:02.716377       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:18:02.716509       1 metrics.go:72] Registering metrics
	I1124 04:18:02.716686       1 controller.go:711] "Syncing nftables rules"
	I1124 04:18:11.045253       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 04:18:11.045428       1 main.go:301] handling current node
	I1124 04:18:21.049679       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 04:18:21.049715       1 main.go:301] handling current node
	
	
	==> kube-apiserver [88db140510be739f963482f2996de33b78a17e5b533d83b82a40f234765849dd] <==
	I1124 04:17:29.574694       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 04:17:29.574998       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 04:17:29.575011       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 04:17:29.575186       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 04:17:29.575626       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 04:17:29.576396       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 04:17:29.580431       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:17:29.588434       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 04:17:29.588518       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 04:17:29.596312       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 04:17:29.610692       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 04:17:29.621389       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1124 04:17:29.622004       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 04:17:29.666627       1 cache.go:39] Caches are synced for autoregister controller
	I1124 04:17:30.086505       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:17:30.191352       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:17:30.358978       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 04:17:30.615985       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 04:17:30.800293       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:17:30.852881       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:17:31.036934       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.87.50"}
	I1124 04:17:31.091937       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.170.17"}
	I1124 04:17:32.861822       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 04:17:32.981785       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 04:17:33.329768       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8ecff8f50d3922e20ff3b13ee70cfc72ccd41cc0050330e8fa59fb1fd12b3749] <==
	I1124 04:17:32.862240       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 04:17:32.862353       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 04:17:32.862445       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-520529"
	I1124 04:17:32.862541       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 04:17:32.864671       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 04:17:32.865905       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 04:17:32.869182       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 04:17:32.871840       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 04:17:32.872283       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 04:17:32.874666       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 04:17:32.874983       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 04:17:32.875042       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 04:17:32.875080       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 04:17:32.875104       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 04:17:32.877140       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 04:17:32.879392       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 04:17:32.889752       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:17:32.889907       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 04:17:32.894565       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 04:17:32.903241       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:17:32.903271       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:17:32.903280       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:17:32.911615       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:17:32.915952       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 04:17:32.937991       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a71adf18e73dd3877d49c754226be539d6ccecca0c8d845a84e7cc52f36eebe7] <==
	I1124 04:17:31.286552       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:17:31.481315       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:17:31.582509       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:17:31.582624       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 04:17:31.582784       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:17:31.680504       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:17:31.680624       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:17:31.684504       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:17:31.684942       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:17:31.685220       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:17:31.686780       1 config.go:200] "Starting service config controller"
	I1124 04:17:31.686834       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:17:31.686883       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:17:31.686909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:17:31.686953       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:17:31.686979       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:17:31.687658       1 config.go:309] "Starting node config controller"
	I1124 04:17:31.690081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:17:31.690140       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:17:31.787166       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:17:31.787288       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:17:31.787321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dbe92e95274246f2a0d7b1498caff07e467f7316997bfdfb9d6b5eb74f4a8db9] <==
	I1124 04:17:31.430095       1 serving.go:386] Generated self-signed cert in-memory
	I1124 04:17:32.862998       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 04:17:32.863034       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:17:32.872428       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 04:17:32.872520       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 04:17:32.872546       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 04:17:32.872594       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 04:17:32.876106       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:17:32.876133       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:17:32.876153       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:17:32.876159       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:17:32.972637       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 04:17:32.976786       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:17:32.976900       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:17:33 embed-certs-520529 kubelet[793]: I1124 04:17:33.514075     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2df\" (UniqueName: \"kubernetes.io/projected/532d8426-c95e-41c5-9b89-a994820a332b-kube-api-access-gq2df\") pod \"kubernetes-dashboard-855c9754f9-ddq4w\" (UID: \"532d8426-c95e-41c5-9b89-a994820a332b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ddq4w"
	Nov 24 04:17:33 embed-certs-520529 kubelet[793]: I1124 04:17:33.514114     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjqn\" (UniqueName: \"kubernetes.io/projected/e44cda19-b8ea-4f37-8228-6beb1d6474b5-kube-api-access-prjqn\") pod \"dashboard-metrics-scraper-6ffb444bf9-rckpj\" (UID: \"e44cda19-b8ea-4f37-8228-6beb1d6474b5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj"
	Nov 24 04:17:33 embed-certs-520529 kubelet[793]: I1124 04:17:33.514143     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e44cda19-b8ea-4f37-8228-6beb1d6474b5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-rckpj\" (UID: \"e44cda19-b8ea-4f37-8228-6beb1d6474b5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj"
	Nov 24 04:17:33 embed-certs-520529 kubelet[793]: W1124 04:17:33.779492     793 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/crio-2836bb8d9da8440e237f5116d6c3a2bb34af4bff7193754a24c351039ddfb9f0 WatchSource:0}: Error finding container 2836bb8d9da8440e237f5116d6c3a2bb34af4bff7193754a24c351039ddfb9f0: Status 404 returned error can't find the container with id 2836bb8d9da8440e237f5116d6c3a2bb34af4bff7193754a24c351039ddfb9f0
	Nov 24 04:17:33 embed-certs-520529 kubelet[793]: W1124 04:17:33.795586     793 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8a3eb121088a2884f162896a1fbffe937d27ff6bf1c385a43a0e9edc1839c5eb/crio-ee17000730b0978eb3ce03dc51b839d5cc96e0553b54b4361b715e16cfb5d392 WatchSource:0}: Error finding container ee17000730b0978eb3ce03dc51b839d5cc96e0553b54b4361b715e16cfb5d392: Status 404 returned error can't find the container with id ee17000730b0978eb3ce03dc51b839d5cc96e0553b54b4361b715e16cfb5d392
	Nov 24 04:17:39 embed-certs-520529 kubelet[793]: I1124 04:17:39.256350     793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 04:17:44 embed-certs-520529 kubelet[793]: I1124 04:17:44.086918     793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ddq4w" podStartSLOduration=6.204109116 podStartE2EDuration="11.086896359s" podCreationTimestamp="2025-11-24 04:17:33 +0000 UTC" firstStartedPulling="2025-11-24 04:17:33.783296652 +0000 UTC m=+9.817445829" lastFinishedPulling="2025-11-24 04:17:38.666083895 +0000 UTC m=+14.700233072" observedRunningTime="2025-11-24 04:17:39.331515314 +0000 UTC m=+15.365664558" watchObservedRunningTime="2025-11-24 04:17:44.086896359 +0000 UTC m=+20.121045545"
	Nov 24 04:17:45 embed-certs-520529 kubelet[793]: I1124 04:17:45.339920     793 scope.go:117] "RemoveContainer" containerID="3d685f5f9bf15d8ae778347ddcd7240abc2e58579d9a2875b4764f0f9aef5ac3"
	Nov 24 04:17:46 embed-certs-520529 kubelet[793]: I1124 04:17:46.347721     793 scope.go:117] "RemoveContainer" containerID="3d685f5f9bf15d8ae778347ddcd7240abc2e58579d9a2875b4764f0f9aef5ac3"
	Nov 24 04:17:46 embed-certs-520529 kubelet[793]: I1124 04:17:46.348537     793 scope.go:117] "RemoveContainer" containerID="1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac"
	Nov 24 04:17:46 embed-certs-520529 kubelet[793]: E1124 04:17:46.348878     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rckpj_kubernetes-dashboard(e44cda19-b8ea-4f37-8228-6beb1d6474b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj" podUID="e44cda19-b8ea-4f37-8228-6beb1d6474b5"
	Nov 24 04:17:47 embed-certs-520529 kubelet[793]: I1124 04:17:47.356912     793 scope.go:117] "RemoveContainer" containerID="1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac"
	Nov 24 04:17:47 embed-certs-520529 kubelet[793]: E1124 04:17:47.357066     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rckpj_kubernetes-dashboard(e44cda19-b8ea-4f37-8228-6beb1d6474b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj" podUID="e44cda19-b8ea-4f37-8228-6beb1d6474b5"
	Nov 24 04:17:54 embed-certs-520529 kubelet[793]: I1124 04:17:54.667506     793 scope.go:117] "RemoveContainer" containerID="1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac"
	Nov 24 04:17:54 embed-certs-520529 kubelet[793]: E1124 04:17:54.668222     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rckpj_kubernetes-dashboard(e44cda19-b8ea-4f37-8228-6beb1d6474b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj" podUID="e44cda19-b8ea-4f37-8228-6beb1d6474b5"
	Nov 24 04:18:01 embed-certs-520529 kubelet[793]: I1124 04:18:01.395030     793 scope.go:117] "RemoveContainer" containerID="3201ebcdcd96c85d6ccc8935814a307b94f8cb6caa93667464e51dc85132e068"
	Nov 24 04:18:07 embed-certs-520529 kubelet[793]: I1124 04:18:07.229235     793 scope.go:117] "RemoveContainer" containerID="1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac"
	Nov 24 04:18:07 embed-certs-520529 kubelet[793]: I1124 04:18:07.414052     793 scope.go:117] "RemoveContainer" containerID="1b62b13a80b00219150218cf41a1b4d2a27a862fe743aa65e032a1608c19f5ac"
	Nov 24 04:18:07 embed-certs-520529 kubelet[793]: I1124 04:18:07.414566     793 scope.go:117] "RemoveContainer" containerID="13b95fe1883b52b2af09a03014debb9c88264e08051cf4f73c66109c0d914123"
	Nov 24 04:18:07 embed-certs-520529 kubelet[793]: E1124 04:18:07.414837     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rckpj_kubernetes-dashboard(e44cda19-b8ea-4f37-8228-6beb1d6474b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj" podUID="e44cda19-b8ea-4f37-8228-6beb1d6474b5"
	Nov 24 04:18:14 embed-certs-520529 kubelet[793]: I1124 04:18:14.667859     793 scope.go:117] "RemoveContainer" containerID="13b95fe1883b52b2af09a03014debb9c88264e08051cf4f73c66109c0d914123"
	Nov 24 04:18:14 embed-certs-520529 kubelet[793]: E1124 04:18:14.668482     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rckpj_kubernetes-dashboard(e44cda19-b8ea-4f37-8228-6beb1d6474b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rckpj" podUID="e44cda19-b8ea-4f37-8228-6beb1d6474b5"
	Nov 24 04:18:23 embed-certs-520529 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 04:18:23 embed-certs-520529 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 04:18:23 embed-certs-520529 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [997f8bac617eec8cefe694cb39fb8f8ea3728aa8ff4e30ca40e239b9ab5d2a8a] <==
	2025/11/24 04:17:38 Starting overwatch
	2025/11/24 04:17:38 Using namespace: kubernetes-dashboard
	2025/11/24 04:17:38 Using in-cluster config to connect to apiserver
	2025/11/24 04:17:38 Using secret token for csrf signing
	2025/11/24 04:17:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 04:17:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 04:17:38 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 04:17:38 Generating JWE encryption key
	2025/11/24 04:17:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 04:17:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 04:17:39 Initializing JWE encryption key from synchronized object
	2025/11/24 04:17:39 Creating in-cluster Sidecar client
	2025/11/24 04:17:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 04:17:39 Serving insecurely on HTTP port: 9090
	2025/11/24 04:18:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3201ebcdcd96c85d6ccc8935814a307b94f8cb6caa93667464e51dc85132e068] <==
	I1124 04:17:31.129800       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 04:18:01.173049       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [32ff3b0eef3a48557f3abf7a60a0b3a38e475c5ff365fe65364698c01cc51e5c] <==
	I1124 04:18:01.530824       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 04:18:01.531018       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 04:18:01.536350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:04.991805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:09.253059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:12.852617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:15.906582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:18.928825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:18.934406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:18:18.934935       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 04:18:18.935157       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-520529_1f9f25e8-bdc7-4235-8eae-352974b3dc75!
	I1124 04:18:18.945988       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ee581ba2-d5b1-413b-ba36-b573eee08872", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-520529_1f9f25e8-bdc7-4235-8eae-352974b3dc75 became leader
	W1124 04:18:18.949282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:18.961577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:18:19.038535       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-520529_1f9f25e8-bdc7-4235-8eae-352974b3dc75!
	W1124 04:18:20.964988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:20.971479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:22.975535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:22.986975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:24.990019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:25.005509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:27.014059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:27.019879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:29.023642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:18:29.032261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
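(The second storage-provisioner log above shows its leader election still going through a legacy v1 Endpoints lock, kube-system/k8s.io-minikube-hostpath, which is what produces the repeated deprecation warnings. A minimal inspection sketch, assuming the embed-certs-520529 context from this run is still reachable; the leader-election record lives in an annotation on that object:

    # Show the legacy Endpoints lock named in the log above.
    kubectl --context embed-certs-520529 -n kube-system \
      get endpoints k8s.io-minikube-hostpath -o yaml
)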
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-520529 -n embed-certs-520529
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-520529 -n embed-certs-520529: exit status 2 (361.571032ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-520529 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.53s)
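(Every in-cluster client in the post-mortem above — coredns, kindnet, the first storage-provisioner — reports the same symptom: dial tcp 10.96.0.1:443: i/o timeout against the kubernetes Service ClusterIP, while the apiserver's own log shows it serving. A hedged probe sketch for this profile, assuming curl is available inside the node image:

    # Confirm 10.96.0.1 is the kubernetes Service ClusterIP, then probe
    # the VIP from inside the node with a short timeout.
    kubectl --context embed-certs-520529 get svc kubernetes -o wide
    minikube ssh -p embed-certs-520529 -- curl -sk --max-time 5 https://10.96.0.1:443/version
)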

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-543467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-543467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (264.790679ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:19:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-543467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
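(The exit-11 path is visible in the stderr above: before enabling an addon, minikube checks whether the runtime is paused by running `sudo runc list -f json` on the node, and here that check failed because /run/runc did not exist. A minimal reproduction sketch against the same profile, assuming the node is still up:

    # Re-run the exact pause check the addon path performs, then look
    # at the directory runc complained about.
    minikube ssh -p newest-cni-543467 -- sudo runc list -f json
    minikube ssh -p newest-cni-543467 -- ls -ld /run/runc
)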
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-543467
helpers_test.go:243: (dbg) docker inspect newest-cni-543467:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa",
	        "Created": "2025-11-24T04:18:39.041842209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 495252,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:18:39.110811218Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/hosts",
	        "LogPath": "/var/lib/docker/containers/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa-json.log",
	        "Name": "/newest-cni-543467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-543467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-543467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa",
	                "LowerDir": "/var/lib/docker/overlay2/508f75bd78cd9ee664b18d9c770c9f2ff20973534449594a6f1b58570079d85b-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/508f75bd78cd9ee664b18d9c770c9f2ff20973534449594a6f1b58570079d85b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/508f75bd78cd9ee664b18d9c770c9f2ff20973534449594a6f1b58570079d85b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/508f75bd78cd9ee664b18d9c770c9f2ff20973534449594a6f1b58570079d85b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-543467",
	                "Source": "/var/lib/docker/volumes/newest-cni-543467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-543467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-543467",
	                "name.minikube.sigs.k8s.io": "newest-cni-543467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "85df4e240c7b23b582cec36e43bde159466ecc98e083f092ac3ce6a9d48d5650",
	            "SandboxKey": "/var/run/docker/netns/85df4e240c7b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-543467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:d8:e5:24:6b:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fbc2fa8442ac0221bba9fd37174f7543e2a4c35cf01fdb513ae8d608db3a956a",
	                    "EndpointID": "dc3315187e37492d6efa97f8b5f806cd98cd5d586171c23c237dcf57fc46c5e5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-543467",
	                        "d5de64ccb4ee"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
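(The inspect dump above is large; for triage, usually only the container state and the 127.0.0.1 host-port bindings matter. An illustrative reduction using jq — jq itself is an assumption here, not something this harness invokes:

    # Keep just the running state and the host port mappings.
    docker inspect newest-cni-543467 \
      | jq '.[0] | {State: .State.Status, Ports: .NetworkSettings.Ports}'
)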
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-543467 -n newest-cni-543467
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-543467 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-543467 logs -n 25: (1.135635665s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-762702                                                                                                                                                                                                                     │ old-k8s-version-762702       │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:14 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p cert-expiration-918798 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-918798       │ jenkins │ v1.37.0 │ 24 Nov 25 04:14 UTC │ 24 Nov 25 04:15 UTC │
	│ delete  │ -p cert-expiration-918798                                                                                                                                                                                                                     │ cert-expiration-918798       │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:15 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-600301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │                     │
	│ stop    │ -p no-preload-600301 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable dashboard -p no-preload-600301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-520529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ stop    │ -p embed-certs-520529 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-520529 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:18 UTC │
	│ image   │ no-preload-600301 image list --format=json                                                                                                                                                                                                    │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ pause   │ -p no-preload-600301 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p disable-driver-mounts-995056                                                                                                                                                                                                               │ disable-driver-mounts-995056 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:19 UTC │
	│ image   │ embed-certs-520529 image list --format=json                                                                                                                                                                                                   │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ pause   │ -p embed-certs-520529 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │                     │
	│ delete  │ -p embed-certs-520529                                                                                                                                                                                                                         │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ delete  │ -p embed-certs-520529                                                                                                                                                                                                                         │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ start   │ -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-543467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
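
The Audit table above is rendered from minikube's JSON audit log. With jq installed, the raw entries for a single profile can be filtered directly; a sketch assuming the default audit log location under the MINIKUBE_HOME used by this job:

	# One tab-separated line per audited command for the newest-cni profile.
	jq -r 'select(.data.profile == "newest-cni-543467") | [.data.command, .data.args] | @tsv' \
	  /home/jenkins/minikube-integration/21975-289526/.minikube/logs/audit.json
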
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:18:33
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:18:33.612520  494861 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:18:33.612711  494861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:18:33.612738  494861 out.go:374] Setting ErrFile to fd 2...
	I1124 04:18:33.612758  494861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:18:33.613042  494861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:18:33.613494  494861 out.go:368] Setting JSON to false
	I1124 04:18:33.614513  494861 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10843,"bootTime":1763947071,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:18:33.614608  494861 start.go:143] virtualization:  
	I1124 04:18:33.618252  494861 out.go:179] * [newest-cni-543467] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:18:33.622092  494861 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:18:33.622181  494861 notify.go:221] Checking for updates...
	I1124 04:18:33.628004  494861 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:18:33.630913  494861 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:18:33.633757  494861 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:18:33.636654  494861 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:18:33.639522  494861 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:18:33.643094  494861 config.go:182] Loaded profile config "default-k8s-diff-port-303179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:18:33.643205  494861 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:18:33.676014  494861 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:18:33.676146  494861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:18:33.734574  494861 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:18:33.72369233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:18:33.734690  494861 docker.go:319] overlay module found
	I1124 04:18:33.737820  494861 out.go:179] * Using the docker driver based on user configuration
	I1124 04:18:33.740680  494861 start.go:309] selected driver: docker
	I1124 04:18:33.740701  494861 start.go:927] validating driver "docker" against <nil>
	I1124 04:18:33.740715  494861 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:18:33.741774  494861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:18:33.795082  494861 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:18:33.78579912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:18:33.795240  494861 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1124 04:18:33.795265  494861 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1124 04:18:33.795502  494861 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 04:18:33.798640  494861 out.go:179] * Using Docker driver with root privileges
	I1124 04:18:33.801511  494861 cni.go:84] Creating CNI manager for ""
	I1124 04:18:33.801586  494861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:18:33.801599  494861 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 04:18:33.801693  494861 start.go:353] cluster config:
	{Name:newest-cni-543467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-543467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:18:33.806641  494861 out.go:179] * Starting "newest-cni-543467" primary control-plane node in "newest-cni-543467" cluster
	I1124 04:18:33.809439  494861 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:18:33.812331  494861 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:18:33.815461  494861 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:18:33.815573  494861 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:18:33.815606  494861 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 04:18:33.815619  494861 cache.go:65] Caching tarball of preloaded images
	I1124 04:18:33.815706  494861 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:18:33.815718  494861 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 04:18:33.815826  494861 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/config.json ...
	I1124 04:18:33.815845  494861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/config.json: {Name:mk5b639a23465464b51e316ee6f246211d37fdaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:33.844674  494861 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:18:33.844701  494861 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:18:33.844737  494861 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:18:33.844778  494861 start.go:360] acquireMachinesLock for newest-cni-543467: {Name:mk49235894ca4bdab744b09877359a6e0584cafb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:18:33.844914  494861 start.go:364] duration metric: took 105.47µs to acquireMachinesLock for "newest-cni-543467"
	I1124 04:18:33.844964  494861 start.go:93] Provisioning new machine with config: &{Name:newest-cni-543467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-543467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:18:33.845070  494861 start.go:125] createHost starting for "" (driver="docker")
	W1124 04:18:32.953497  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	W1124 04:18:35.452261  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	I1124 04:18:33.848830  494861 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 04:18:33.849118  494861 start.go:159] libmachine.API.Create for "newest-cni-543467" (driver="docker")
	I1124 04:18:33.849164  494861 client.go:173] LocalClient.Create starting
	I1124 04:18:33.849252  494861 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem
	I1124 04:18:33.849327  494861 main.go:143] libmachine: Decoding PEM data...
	I1124 04:18:33.849350  494861 main.go:143] libmachine: Parsing certificate...
	I1124 04:18:33.849412  494861 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem
	I1124 04:18:33.849436  494861 main.go:143] libmachine: Decoding PEM data...
	I1124 04:18:33.849448  494861 main.go:143] libmachine: Parsing certificate...
	I1124 04:18:33.849819  494861 cli_runner.go:164] Run: docker network inspect newest-cni-543467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 04:18:33.867713  494861 cli_runner.go:211] docker network inspect newest-cni-543467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 04:18:33.867799  494861 network_create.go:284] running [docker network inspect newest-cni-543467] to gather additional debugging logs...
	I1124 04:18:33.867820  494861 cli_runner.go:164] Run: docker network inspect newest-cni-543467
	W1124 04:18:33.885004  494861 cli_runner.go:211] docker network inspect newest-cni-543467 returned with exit code 1
	I1124 04:18:33.885038  494861 network_create.go:287] error running [docker network inspect newest-cni-543467]: docker network inspect newest-cni-543467: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-543467 not found
	I1124 04:18:33.885053  494861 network_create.go:289] output of [docker network inspect newest-cni-543467]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-543467 not found
	
	** /stderr **
	I1124 04:18:33.885166  494861 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:18:33.901705  494861 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-740fb099fccc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:7a:9c:b0:4d:41} reservation:<nil>}
	I1124 04:18:33.902093  494861 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b0f25a7c590 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:53:b3:a1:55:1a} reservation:<nil>}
	I1124 04:18:33.902320  494861 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c1d995330d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:83:d9:0c:83:10} reservation:<nil>}
	I1124 04:18:33.902791  494861 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400199b270}
	I1124 04:18:33.902816  494861 network_create.go:124] attempt to create docker network newest-cni-543467 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 04:18:33.902880  494861 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-543467 newest-cni-543467
	I1124 04:18:33.969423  494861 network_create.go:108] docker network newest-cni-543467 192.168.76.0/24 created
	I1124 04:18:33.969457  494861 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-543467" container
	I1124 04:18:33.969533  494861 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 04:18:33.985560  494861 cli_runner.go:164] Run: docker volume create newest-cni-543467 --label name.minikube.sigs.k8s.io=newest-cni-543467 --label created_by.minikube.sigs.k8s.io=true
	I1124 04:18:34.006794  494861 oci.go:103] Successfully created a docker volume newest-cni-543467
	I1124 04:18:34.006893  494861 cli_runner.go:164] Run: docker run --rm --name newest-cni-543467-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-543467 --entrypoint /usr/bin/test -v newest-cni-543467:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 04:18:34.566844  494861 oci.go:107] Successfully prepared a docker volume newest-cni-543467
	I1124 04:18:34.566914  494861 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:18:34.566925  494861 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 04:18:34.566993  494861 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-543467:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	W1124 04:18:37.452355  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	W1124 04:18:39.456483  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	I1124 04:18:38.971446  494861 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-543467:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (4.404413344s)
	I1124 04:18:38.971483  494861 kic.go:203] duration metric: took 4.404554687s to extract preloaded images to volume ...
	W1124 04:18:38.971637  494861 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 04:18:38.971744  494861 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 04:18:39.027020  494861 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-543467 --name newest-cni-543467 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-543467 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-543467 --network newest-cni-543467 --ip 192.168.76.2 --volume newest-cni-543467:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 04:18:39.334614  494861 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Running}}
	I1124 04:18:39.356529  494861 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:18:39.386909  494861 cli_runner.go:164] Run: docker exec newest-cni-543467 stat /var/lib/dpkg/alternatives/iptables
	I1124 04:18:39.453571  494861 oci.go:144] the created container "newest-cni-543467" has a running status.
	I1124 04:18:39.453597  494861 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa...
	I1124 04:18:39.644988  494861 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 04:18:39.669082  494861 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:18:39.691569  494861 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 04:18:39.691589  494861 kic_runner.go:114] Args: [docker exec --privileged newest-cni-543467 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 04:18:39.756886  494861 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:18:39.785035  494861 machine.go:94] provisionDockerMachine start ...
	I1124 04:18:39.785121  494861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:18:39.812142  494861 main.go:143] libmachine: Using SSH client type: native
	I1124 04:18:39.812523  494861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1124 04:18:39.812540  494861 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:18:39.813121  494861 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35584->127.0.0.1:33456: read: connection reset by peer
	I1124 04:18:42.966432  494861 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-543467
	
	I1124 04:18:42.966481  494861 ubuntu.go:182] provisioning hostname "newest-cni-543467"
	I1124 04:18:42.966549  494861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:18:42.984615  494861 main.go:143] libmachine: Using SSH client type: native
	I1124 04:18:42.984946  494861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1124 04:18:42.984963  494861 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-543467 && echo "newest-cni-543467" | sudo tee /etc/hostname
	I1124 04:18:43.148886  494861 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-543467
	
	I1124 04:18:43.148982  494861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:18:43.168512  494861 main.go:143] libmachine: Using SSH client type: native
	I1124 04:18:43.168861  494861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1124 04:18:43.168878  494861 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-543467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-543467/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-543467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 04:18:43.314654  494861 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 04:18:43.314683  494861 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:18:43.314712  494861 ubuntu.go:190] setting up certificates
	I1124 04:18:43.314728  494861 provision.go:84] configureAuth start
	I1124 04:18:43.314788  494861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-543467
	I1124 04:18:43.332626  494861 provision.go:143] copyHostCerts
	I1124 04:18:43.332706  494861 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:18:43.332721  494861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:18:43.332864  494861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:18:43.332980  494861 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:18:43.332993  494861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:18:43.333025  494861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:18:43.333082  494861 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:18:43.333091  494861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:18:43.333114  494861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:18:43.333208  494861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.newest-cni-543467 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-543467]
	W1124 04:18:41.951539  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	W1124 04:18:43.952097  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	I1124 04:18:44.157815  494861 provision.go:177] copyRemoteCerts
	I1124 04:18:44.157890  494861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:18:44.157946  494861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:18:44.177110  494861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:18:44.282403  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:18:44.300673  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 04:18:44.321135  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 04:18:44.338975  494861 provision.go:87] duration metric: took 1.024223182s to configureAuth
	I1124 04:18:44.339049  494861 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:18:44.339279  494861 config.go:182] Loaded profile config "newest-cni-543467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:18:44.339390  494861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:18:44.357069  494861 main.go:143] libmachine: Using SSH client type: native
	I1124 04:18:44.357385  494861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1124 04:18:44.357404  494861 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:18:44.659261  494861 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 04:18:44.659286  494861 machine.go:97] duration metric: took 4.874229854s to provisionDockerMachine
	I1124 04:18:44.659297  494861 client.go:176] duration metric: took 10.810122736s to LocalClient.Create
	I1124 04:18:44.659308  494861 start.go:167] duration metric: took 10.810192538s to libmachine.API.Create "newest-cni-543467"
	I1124 04:18:44.659315  494861 start.go:293] postStartSetup for "newest-cni-543467" (driver="docker")
	I1124 04:18:44.659325  494861 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:18:44.659394  494861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:18:44.659444  494861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:18:44.677460  494861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:18:44.782805  494861 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:18:44.786274  494861 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:18:44.786306  494861 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:18:44.786318  494861 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:18:44.786378  494861 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:18:44.786506  494861 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:18:44.786629  494861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:18:44.794960  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:18:44.813622  494861 start.go:296] duration metric: took 154.291989ms for postStartSetup
	I1124 04:18:44.813994  494861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-543467
	I1124 04:18:44.830054  494861 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/config.json ...
	I1124 04:18:44.830439  494861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:18:44.830530  494861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:18:44.859819  494861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:18:44.967558  494861 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:18:44.972119  494861 start.go:128] duration metric: took 11.127033889s to createHost
	I1124 04:18:44.972145  494861 start.go:83] releasing machines lock for "newest-cni-543467", held for 11.127210966s
	I1124 04:18:44.972227  494861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-543467
	I1124 04:18:44.990817  494861 ssh_runner.go:195] Run: cat /version.json
	I1124 04:18:44.990868  494861 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:18:44.990955  494861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:18:44.990872  494861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:18:45.037520  494861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:18:45.074167  494861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:18:45.220213  494861 ssh_runner.go:195] Run: systemctl --version
	I1124 04:18:45.331872  494861 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:18:45.370485  494861 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:18:45.375461  494861 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:18:45.375591  494861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:18:45.404843  494861 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 04:18:45.404868  494861 start.go:496] detecting cgroup driver to use...
	I1124 04:18:45.404901  494861 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:18:45.404949  494861 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:18:45.424421  494861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:18:45.438064  494861 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:18:45.438126  494861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:18:45.457426  494861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:18:45.477461  494861 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:18:45.604108  494861 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:18:45.734502  494861 docker.go:234] disabling docker service ...
	I1124 04:18:45.734600  494861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:18:45.756662  494861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:18:45.770285  494861 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:18:45.890111  494861 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:18:46.024310  494861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:18:46.039921  494861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:18:46.055385  494861 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:18:46.055515  494861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:46.065563  494861 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:18:46.065637  494861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:46.075258  494861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:46.084723  494861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:46.094049  494861 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:18:46.102426  494861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:46.111706  494861 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:46.125283  494861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:18:46.134984  494861 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:18:46.143381  494861 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:18:46.151612  494861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:18:46.270415  494861 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 04:18:46.446494  494861 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:18:46.446630  494861 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:18:46.452834  494861 start.go:564] Will wait 60s for crictl version
	I1124 04:18:46.452943  494861 ssh_runner.go:195] Run: which crictl
	I1124 04:18:46.456726  494861 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:18:46.484064  494861 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:18:46.484193  494861 ssh_runner.go:195] Run: crio --version
	I1124 04:18:46.514276  494861 ssh_runner.go:195] Run: crio --version
	I1124 04:18:46.545930  494861 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:18:46.548502  494861 cli_runner.go:164] Run: docker network inspect newest-cni-543467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:18:46.565952  494861 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 04:18:46.569863  494861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:18:46.583919  494861 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 04:18:46.586635  494861 kubeadm.go:884] updating cluster {Name:newest-cni-543467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-543467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:18:46.586806  494861 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:18:46.586881  494861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:18:46.620877  494861 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:18:46.620903  494861 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:18:46.620984  494861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:18:46.651129  494861 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:18:46.651154  494861 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:18:46.651162  494861 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1124 04:18:46.651267  494861 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-543467 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-543467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 04:18:46.651348  494861 ssh_runner.go:195] Run: crio config
	I1124 04:18:46.718257  494861 cni.go:84] Creating CNI manager for ""
	I1124 04:18:46.718321  494861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:18:46.718360  494861 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 04:18:46.718399  494861 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-543467 NodeName:newest-cni-543467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:18:46.718568  494861 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-543467"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
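The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. To sanity-check a config like this by hand, recent kubeadm releases can validate it without touching the node (a sketch; assumes kubeadm >= v1.26 for the validate subcommand):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or walk the full init path without changing the node:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run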
	I1124 04:18:46.718665  494861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:18:46.726593  494861 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:18:46.726704  494861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:18:46.734437  494861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 04:18:46.747736  494861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:18:46.761341  494861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1124 04:18:46.775547  494861 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:18:46.779646  494861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:18:46.790690  494861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:18:46.926087  494861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:18:46.944926  494861 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467 for IP: 192.168.76.2
	I1124 04:18:46.944951  494861 certs.go:195] generating shared ca certs ...
	I1124 04:18:46.944974  494861 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:46.945129  494861 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:18:46.945177  494861 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:18:46.945189  494861 certs.go:257] generating profile certs ...
	I1124 04:18:46.945244  494861 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/client.key
	I1124 04:18:46.945261  494861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/client.crt with IP's: []
	I1124 04:18:47.113043  494861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/client.crt ...
	I1124 04:18:47.113079  494861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/client.crt: {Name:mkcbaf5c7c0996a88c3f1f57d4f0aedef8cbd5e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:47.113312  494861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/client.key ...
	I1124 04:18:47.113330  494861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/client.key: {Name:mk99e0b5111453c6d4d75deee4d876cc72d19099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:47.113423  494861 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.key.e6db7c28
	I1124 04:18:47.113444  494861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.crt.e6db7c28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 04:18:47.170338  494861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.crt.e6db7c28 ...
	I1124 04:18:47.170370  494861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.crt.e6db7c28: {Name:mk01905a0284bae2234ebc8c6cf95cdae1a75729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:47.170556  494861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.key.e6db7c28 ...
	I1124 04:18:47.170572  494861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.key.e6db7c28: {Name:mk2b4ae2c532995150d773227b0000d9f3d88b57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:47.170653  494861 certs.go:382] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.crt.e6db7c28 -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.crt
	I1124 04:18:47.170773  494861 certs.go:386] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.key.e6db7c28 -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.key
	I1124 04:18:47.170838  494861 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/proxy-client.key
	I1124 04:18:47.170856  494861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/proxy-client.crt with IP's: []
	I1124 04:18:47.401304  494861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/proxy-client.crt ...
	I1124 04:18:47.401336  494861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/proxy-client.crt: {Name:mk9c4f4aa3649b53f2e440929ce809e11764af02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:47.401520  494861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/proxy-client.key ...
	I1124 04:18:47.401537  494861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/proxy-client.key: {Name:mkf06281a26b80843e5b9a68e4619d824d9d38f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:18:47.401727  494861 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:18:47.401772  494861 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:18:47.401787  494861 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:18:47.401815  494861 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:18:47.401843  494861 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:18:47.401871  494861 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:18:47.401921  494861 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:18:47.402539  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:18:47.422714  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:18:47.443756  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:18:47.462714  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:18:47.480842  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 04:18:47.500878  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 04:18:47.519068  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:18:47.538004  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 04:18:47.554831  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:18:47.571627  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:18:47.588967  494861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:18:47.607755  494861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:18:47.620558  494861 ssh_runner.go:195] Run: openssl version
	I1124 04:18:47.629056  494861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:18:47.638525  494861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:18:47.643002  494861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:18:47.643146  494861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:18:47.690147  494861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
	I1124 04:18:47.700790  494861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:18:47.709647  494861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:18:47.713924  494861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:18:47.713990  494861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:18:47.755115  494861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 04:18:47.763558  494861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:18:47.771836  494861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:18:47.775602  494861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:18:47.775662  494861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:18:47.817103  494861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
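The openssl x509 -hash calls above print the subject-name hash OpenSSL uses for CA lookup, and the ln -fs commands create the <hash>.0 symlinks (51391683.0, 3ec20f2e.0, b5213941.0) that make each PEM discoverable under /etc/ssl/certs. The manual equivalent, with a placeholder certificate name:

	h=$(openssl x509 -hash -noout -in /etc/ssl/certs/example.pem)
	sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${h}.0"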
	I1124 04:18:47.825410  494861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:18:47.829013  494861 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 04:18:47.829087  494861 kubeadm.go:401] StartCluster: {Name:newest-cni-543467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-543467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:18:47.829181  494861 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:18:47.829237  494861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:18:47.856470  494861 cri.go:89] found id: ""
	I1124 04:18:47.856588  494861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:18:47.865245  494861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 04:18:47.873097  494861 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 04:18:47.873195  494861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 04:18:47.881165  494861 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 04:18:47.881187  494861 kubeadm.go:158] found existing configuration files:
	
	I1124 04:18:47.881271  494861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 04:18:47.889135  494861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 04:18:47.889256  494861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 04:18:47.896567  494861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 04:18:47.903812  494861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 04:18:47.903892  494861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 04:18:47.916102  494861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 04:18:47.925654  494861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 04:18:47.925719  494861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 04:18:47.933186  494861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 04:18:47.941065  494861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 04:18:47.941180  494861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 04:18:47.948757  494861 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 04:18:48.024523  494861 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 04:18:48.024854  494861 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 04:18:48.100078  494861 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1124 04:18:46.451714  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	W1124 04:18:48.452226  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	W1124 04:18:50.453257  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	W1124 04:18:52.952830  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	W1124 04:18:55.453343  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	W1124 04:18:57.953022  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	W1124 04:19:00.452693  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	W1124 04:19:02.951821  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	W1124 04:19:05.452408  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	I1124 04:19:06.068959  494861 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 04:19:06.069023  494861 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 04:19:06.069107  494861 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 04:19:06.069166  494861 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 04:19:06.069203  494861 kubeadm.go:319] OS: Linux
	I1124 04:19:06.069264  494861 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 04:19:06.069315  494861 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 04:19:06.069363  494861 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 04:19:06.069412  494861 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 04:19:06.069462  494861 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 04:19:06.069509  494861 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 04:19:06.069558  494861 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 04:19:06.069609  494861 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 04:19:06.069654  494861 kubeadm.go:319] CGROUPS_BLKIO: enabled
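The preflight verification above enumerates cgroup v1 controllers (this host is still on cgroups v1, per the maintenance-mode warning at 04:18:48). To reproduce the check directly on a node:

	cat /proc/cgroups                      # v1: controller names and enabled flags
	cat /sys/fs/cgroup/cgroup.controllers  # v2: present only on a unified hierarchy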
	I1124 04:19:06.069725  494861 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 04:19:06.069818  494861 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 04:19:06.069907  494861 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 04:19:06.069969  494861 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 04:19:06.073192  494861 out.go:252]   - Generating certificates and keys ...
	I1124 04:19:06.073292  494861 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 04:19:06.073359  494861 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 04:19:06.073427  494861 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 04:19:06.073485  494861 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 04:19:06.073546  494861 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 04:19:06.073598  494861 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 04:19:06.073649  494861 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 04:19:06.073763  494861 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-543467] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 04:19:06.073816  494861 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 04:19:06.073940  494861 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-543467] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 04:19:06.074007  494861 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 04:19:06.074074  494861 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 04:19:06.074121  494861 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 04:19:06.074178  494861 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 04:19:06.074228  494861 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 04:19:06.074286  494861 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 04:19:06.074342  494861 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 04:19:06.074406  494861 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 04:19:06.074514  494861 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 04:19:06.074599  494861 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 04:19:06.074665  494861 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 04:19:06.077546  494861 out.go:252]   - Booting up control plane ...
	I1124 04:19:06.077670  494861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 04:19:06.077752  494861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 04:19:06.077827  494861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 04:19:06.077976  494861 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 04:19:06.078088  494861 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 04:19:06.078194  494861 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 04:19:06.078278  494861 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 04:19:06.078317  494861 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 04:19:06.078497  494861 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 04:19:06.078606  494861 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 04:19:06.078664  494861 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501102754s
	I1124 04:19:06.078761  494861 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 04:19:06.078849  494861 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 04:19:06.078942  494861 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 04:19:06.079025  494861 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 04:19:06.079104  494861 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.955459579s
	I1124 04:19:06.079184  494861 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.392572085s
	I1124 04:19:06.079255  494861 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.502093165s
	I1124 04:19:06.079360  494861 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 04:19:06.079500  494861 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 04:19:06.079583  494861 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 04:19:06.079787  494861 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-543467 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 04:19:06.079843  494861 kubeadm.go:319] [bootstrap-token] Using token: fv4g1e.aq6mburzhrsctsw7
	I1124 04:19:06.083127  494861 out.go:252]   - Configuring RBAC rules ...
	I1124 04:19:06.083269  494861 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 04:19:06.083361  494861 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 04:19:06.083516  494861 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 04:19:06.083698  494861 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 04:19:06.083838  494861 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 04:19:06.083923  494861 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 04:19:06.084044  494861 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 04:19:06.084107  494861 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 04:19:06.084174  494861 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 04:19:06.084201  494861 kubeadm.go:319] 
	I1124 04:19:06.084277  494861 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 04:19:06.084288  494861 kubeadm.go:319] 
	I1124 04:19:06.084385  494861 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 04:19:06.084391  494861 kubeadm.go:319] 
	I1124 04:19:06.084417  494861 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 04:19:06.084477  494861 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 04:19:06.084535  494861 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 04:19:06.084549  494861 kubeadm.go:319] 
	I1124 04:19:06.084605  494861 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 04:19:06.084615  494861 kubeadm.go:319] 
	I1124 04:19:06.084662  494861 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 04:19:06.084671  494861 kubeadm.go:319] 
	I1124 04:19:06.084723  494861 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 04:19:06.084801  494861 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 04:19:06.084875  494861 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 04:19:06.084884  494861 kubeadm.go:319] 
	I1124 04:19:06.084970  494861 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 04:19:06.085050  494861 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 04:19:06.085064  494861 kubeadm.go:319] 
	I1124 04:19:06.085148  494861 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fv4g1e.aq6mburzhrsctsw7 \
	I1124 04:19:06.085255  494861 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 \
	I1124 04:19:06.085280  494861 kubeadm.go:319] 	--control-plane 
	I1124 04:19:06.085287  494861 kubeadm.go:319] 
	I1124 04:19:06.085372  494861 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 04:19:06.085380  494861 kubeadm.go:319] 
	I1124 04:19:06.085463  494861 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fv4g1e.aq6mburzhrsctsw7 \
	I1124 04:19:06.085582  494861 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 
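The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA: it is the SHA-256 digest of the CA certificate's DER-encoded public key. It can be recomputed on the control plane with the standard openssl pipeline (path shown for minikube's cert layout; on a stock kubeadm host it would be /etc/kubernetes/pki/ca.crt):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256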
	I1124 04:19:06.085594  494861 cni.go:84] Creating CNI manager for ""
	I1124 04:19:06.085602  494861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:19:06.088940  494861 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 04:19:06.091882  494861 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 04:19:06.096230  494861 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 04:19:06.096254  494861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 04:19:06.111451  494861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 04:19:06.420880  494861 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 04:19:06.421017  494861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:19:06.421111  494861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-543467 minikube.k8s.io/updated_at=2025_11_24T04_19_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=newest-cni-543467 minikube.k8s.io/primary=true
	I1124 04:19:06.433990  494861 ops.go:34] apiserver oom_adj: -16
	I1124 04:19:06.566098  494861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:19:07.066652  494861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:19:07.566618  494861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:19:08.066682  494861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:19:08.566203  494861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1124 04:19:07.951836  490948 node_ready.go:57] node "default-k8s-diff-port-303179" has "Ready":"False" status (will retry)
	I1124 04:19:08.951756  490948 node_ready.go:49] node "default-k8s-diff-port-303179" is "Ready"
	I1124 04:19:08.951785  490948 node_ready.go:38] duration metric: took 40.503142724s for node "default-k8s-diff-port-303179" to be "Ready" ...
	I1124 04:19:08.951800  490948 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:19:08.951861  490948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:19:08.979195  490948 api_server.go:72] duration metric: took 42.170762822s to wait for apiserver process to appear ...
	I1124 04:19:08.979222  490948 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:19:08.979242  490948 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 04:19:08.990120  490948 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 04:19:08.992403  490948 api_server.go:141] control plane version: v1.34.1
	I1124 04:19:08.992438  490948 api_server.go:131] duration metric: took 13.208735ms to wait for apiserver health ...
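The healthz wait above is a plain HTTPS GET that succeeds on a 200/ok body. The same probe from a shell, against the endpoint as logged (-k skips CA verification; the endpoint may require credentials if anonymous auth is disabled on the apiserver):

	curl -k https://192.168.85.2:8444/healthz
	curl -k "https://192.168.85.2:8444/healthz?verbose"   # lists each sub-check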
	I1124 04:19:08.992448  490948 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:19:08.996819  490948 system_pods.go:59] 8 kube-system pods found
	I1124 04:19:08.996863  490948 system_pods.go:61] "coredns-66bc5c9577-jtn7v" [cd5d148d-8e9e-4bac-a54c-d71637a8cb0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:19:08.996871  490948 system_pods.go:61] "etcd-default-k8s-diff-port-303179" [e10607ab-490f-4a61-a1f9-a3c5c06f86b7] Running
	I1124 04:19:08.996877  490948 system_pods.go:61] "kindnet-wpp6p" [0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3] Running
	I1124 04:19:08.996881  490948 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-303179" [6f48a510-e83c-4667-a542-5953227201ff] Running
	I1124 04:19:08.996888  490948 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-303179" [6f1d9347-dbe0-4770-b829-de7cf4fe9934] Running
	I1124 04:19:08.996892  490948 system_pods.go:61] "kube-proxy-dxbvb" [24177ca5-eb2f-4ac2-a32c-d384781bad58] Running
	I1124 04:19:08.996900  490948 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-303179" [b819c0ad-3c09-46e4-84a8-e7f1ad21b768] Running
	I1124 04:19:08.996909  490948 system_pods.go:61] "storage-provisioner" [4d7d1174-e169-4297-a8a2-55a47f03d9d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:19:08.996916  490948 system_pods.go:74] duration metric: took 4.461625ms to wait for pod list to return data ...
	I1124 04:19:08.996931  490948 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:19:09.008242  490948 default_sa.go:45] found service account: "default"
	I1124 04:19:09.008273  490948 default_sa.go:55] duration metric: took 11.334734ms for default service account to be created ...
	I1124 04:19:09.008283  490948 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 04:19:09.023629  490948 system_pods.go:86] 8 kube-system pods found
	I1124 04:19:09.023673  490948 system_pods.go:89] "coredns-66bc5c9577-jtn7v" [cd5d148d-8e9e-4bac-a54c-d71637a8cb0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:19:09.023681  490948 system_pods.go:89] "etcd-default-k8s-diff-port-303179" [e10607ab-490f-4a61-a1f9-a3c5c06f86b7] Running
	I1124 04:19:09.023689  490948 system_pods.go:89] "kindnet-wpp6p" [0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3] Running
	I1124 04:19:09.023717  490948 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-303179" [6f48a510-e83c-4667-a542-5953227201ff] Running
	I1124 04:19:09.023728  490948 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-303179" [6f1d9347-dbe0-4770-b829-de7cf4fe9934] Running
	I1124 04:19:09.023742  490948 system_pods.go:89] "kube-proxy-dxbvb" [24177ca5-eb2f-4ac2-a32c-d384781bad58] Running
	I1124 04:19:09.023747  490948 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-303179" [b819c0ad-3c09-46e4-84a8-e7f1ad21b768] Running
	I1124 04:19:09.023766  490948 system_pods.go:89] "storage-provisioner" [4d7d1174-e169-4297-a8a2-55a47f03d9d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:19:09.023793  490948 retry.go:31] will retry after 258.421636ms: missing components: kube-dns
	I1124 04:19:09.286758  490948 system_pods.go:86] 8 kube-system pods found
	I1124 04:19:09.286794  490948 system_pods.go:89] "coredns-66bc5c9577-jtn7v" [cd5d148d-8e9e-4bac-a54c-d71637a8cb0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:19:09.286802  490948 system_pods.go:89] "etcd-default-k8s-diff-port-303179" [e10607ab-490f-4a61-a1f9-a3c5c06f86b7] Running
	I1124 04:19:09.286809  490948 system_pods.go:89] "kindnet-wpp6p" [0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3] Running
	I1124 04:19:09.286855  490948 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-303179" [6f48a510-e83c-4667-a542-5953227201ff] Running
	I1124 04:19:09.286861  490948 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-303179" [6f1d9347-dbe0-4770-b829-de7cf4fe9934] Running
	I1124 04:19:09.286866  490948 system_pods.go:89] "kube-proxy-dxbvb" [24177ca5-eb2f-4ac2-a32c-d384781bad58] Running
	I1124 04:19:09.286874  490948 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-303179" [b819c0ad-3c09-46e4-84a8-e7f1ad21b768] Running
	I1124 04:19:09.286881  490948 system_pods.go:89] "storage-provisioner" [4d7d1174-e169-4297-a8a2-55a47f03d9d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:19:09.286911  490948 retry.go:31] will retry after 366.158699ms: missing components: kube-dns
	I1124 04:19:09.657730  490948 system_pods.go:86] 8 kube-system pods found
	I1124 04:19:09.657768  490948 system_pods.go:89] "coredns-66bc5c9577-jtn7v" [cd5d148d-8e9e-4bac-a54c-d71637a8cb0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:19:09.657777  490948 system_pods.go:89] "etcd-default-k8s-diff-port-303179" [e10607ab-490f-4a61-a1f9-a3c5c06f86b7] Running
	I1124 04:19:09.657784  490948 system_pods.go:89] "kindnet-wpp6p" [0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3] Running
	I1124 04:19:09.657822  490948 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-303179" [6f48a510-e83c-4667-a542-5953227201ff] Running
	I1124 04:19:09.657835  490948 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-303179" [6f1d9347-dbe0-4770-b829-de7cf4fe9934] Running
	I1124 04:19:09.657840  490948 system_pods.go:89] "kube-proxy-dxbvb" [24177ca5-eb2f-4ac2-a32c-d384781bad58] Running
	I1124 04:19:09.657848  490948 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-303179" [b819c0ad-3c09-46e4-84a8-e7f1ad21b768] Running
	I1124 04:19:09.657857  490948 system_pods.go:89] "storage-provisioner" [4d7d1174-e169-4297-a8a2-55a47f03d9d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 04:19:09.657886  490948 retry.go:31] will retry after 419.368589ms: missing components: kube-dns
	I1124 04:19:10.093096  490948 system_pods.go:86] 8 kube-system pods found
	I1124 04:19:10.093131  490948 system_pods.go:89] "coredns-66bc5c9577-jtn7v" [cd5d148d-8e9e-4bac-a54c-d71637a8cb0c] Running
	I1124 04:19:10.093139  490948 system_pods.go:89] "etcd-default-k8s-diff-port-303179" [e10607ab-490f-4a61-a1f9-a3c5c06f86b7] Running
	I1124 04:19:10.093146  490948 system_pods.go:89] "kindnet-wpp6p" [0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3] Running
	I1124 04:19:10.093184  490948 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-303179" [6f48a510-e83c-4667-a542-5953227201ff] Running
	I1124 04:19:10.093197  490948 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-303179" [6f1d9347-dbe0-4770-b829-de7cf4fe9934] Running
	I1124 04:19:10.093203  490948 system_pods.go:89] "kube-proxy-dxbvb" [24177ca5-eb2f-4ac2-a32c-d384781bad58] Running
	I1124 04:19:10.093210  490948 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-303179" [b819c0ad-3c09-46e4-84a8-e7f1ad21b768] Running
	I1124 04:19:10.093219  490948 system_pods.go:89] "storage-provisioner" [4d7d1174-e169-4297-a8a2-55a47f03d9d6] Running
	I1124 04:19:10.093228  490948 system_pods.go:126] duration metric: took 1.0849378s to wait for k8s-apps to be running ...
	I1124 04:19:10.093250  490948 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 04:19:10.093332  490948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:19:10.113102  490948 system_svc.go:56] duration metric: took 19.84413ms WaitForService to wait for kubelet
	I1124 04:19:10.113179  490948 kubeadm.go:587] duration metric: took 43.304751081s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:19:10.113214  490948 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:19:10.117237  490948 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:19:10.117318  490948 node_conditions.go:123] node cpu capacity is 2
	I1124 04:19:10.117348  490948 node_conditions.go:105] duration metric: took 4.107636ms to run NodePressure ...
	I1124 04:19:10.117375  490948 start.go:242] waiting for startup goroutines ...
	I1124 04:19:10.117406  490948 start.go:247] waiting for cluster config update ...
	I1124 04:19:10.117434  490948 start.go:256] writing updated cluster config ...
	I1124 04:19:10.117843  490948 ssh_runner.go:195] Run: rm -f paused
	I1124 04:19:10.122547  490948 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:19:10.126839  490948 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jtn7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:19:10.133066  490948 pod_ready.go:94] pod "coredns-66bc5c9577-jtn7v" is "Ready"
	I1124 04:19:10.133149  490948 pod_ready.go:86] duration metric: took 6.232912ms for pod "coredns-66bc5c9577-jtn7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:19:10.136035  490948 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:19:10.141786  490948 pod_ready.go:94] pod "etcd-default-k8s-diff-port-303179" is "Ready"
	I1124 04:19:10.141865  490948 pod_ready.go:86] duration metric: took 5.756418ms for pod "etcd-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:19:10.145051  490948 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:19:10.156029  490948 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-303179" is "Ready"
	I1124 04:19:10.156105  490948 pod_ready.go:86] duration metric: took 10.974369ms for pod "kube-apiserver-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:19:10.161072  490948 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:19:10.527521  490948 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-303179" is "Ready"
	I1124 04:19:10.527614  490948 pod_ready.go:86] duration metric: took 366.470415ms for pod "kube-controller-manager-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:19:09.066742  494861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:19:09.567207  494861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:19:10.067075  494861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:19:10.566418  494861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:19:10.718390  494861 kubeadm.go:1114] duration metric: took 4.297417329s to wait for elevateKubeSystemPrivileges
	I1124 04:19:10.718423  494861 kubeadm.go:403] duration metric: took 22.889340241s to StartCluster
	I1124 04:19:10.718441  494861 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:10.718520  494861 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:19:10.719515  494861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:10.719739  494861 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:19:10.719844  494861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 04:19:10.720102  494861 config.go:182] Loaded profile config "newest-cni-543467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:19:10.720148  494861 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:19:10.720212  494861 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-543467"
	I1124 04:19:10.720232  494861 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-543467"
	I1124 04:19:10.720257  494861 host.go:66] Checking if "newest-cni-543467" exists ...
	I1124 04:19:10.720753  494861 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:19:10.721436  494861 addons.go:70] Setting default-storageclass=true in profile "newest-cni-543467"
	I1124 04:19:10.721460  494861 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-543467"
	I1124 04:19:10.721745  494861 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:19:10.723194  494861 out.go:179] * Verifying Kubernetes components...
	I1124 04:19:10.728900  494861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:10.776069  494861 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:19:10.727427  490948 pod_ready.go:83] waiting for pod "kube-proxy-dxbvb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:19:11.127443  490948 pod_ready.go:94] pod "kube-proxy-dxbvb" is "Ready"
	I1124 04:19:11.127466  490948 pod_ready.go:86] duration metric: took 399.977226ms for pod "kube-proxy-dxbvb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:19:11.327442  490948 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:19:11.726641  490948 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-303179" is "Ready"
	I1124 04:19:11.726673  490948 pod_ready.go:86] duration metric: took 399.200059ms for pod "kube-scheduler-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:19:11.726689  490948 pod_ready.go:40] duration metric: took 1.6040601s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:19:11.818631  490948 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 04:19:11.821985  490948 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-303179" cluster and "default" namespace by default
	I1124 04:19:10.778950  494861 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:19:10.778980  494861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:19:10.779049  494861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:10.780534  494861 addons.go:239] Setting addon default-storageclass=true in "newest-cni-543467"
	I1124 04:19:10.780577  494861 host.go:66] Checking if "newest-cni-543467" exists ...
	I1124 04:19:10.780990  494861 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:19:10.826738  494861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:19:10.829745  494861 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:19:10.829763  494861 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:19:10.829833  494861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:10.867068  494861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:19:11.133600  494861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:19:11.142815  494861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:19:11.176870  494861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 04:19:11.176976  494861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:19:12.053750  494861 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:19:12.054083  494861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:19:12.054387  494861 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 04:19:12.055996  494861 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 04:19:12.059771  494861 addons.go:530] duration metric: took 1.339610173s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 04:19:12.088340  494861 api_server.go:72] duration metric: took 1.368560236s to wait for apiserver process to appear ...
	I1124 04:19:12.088363  494861 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:19:12.088381  494861 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:19:12.143234  494861 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 04:19:12.145209  494861 api_server.go:141] control plane version: v1.34.1
	I1124 04:19:12.145234  494861 api_server.go:131] duration metric: took 56.86479ms to wait for apiserver health ...
	I1124 04:19:12.145244  494861 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:19:12.156916  494861 system_pods.go:59] 9 kube-system pods found
	I1124 04:19:12.156953  494861 system_pods.go:61] "coredns-66bc5c9577-crwzn" [0bcadfb0-f95f-42d3-b0c7-cd5d9056a0d6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 04:19:12.156961  494861 system_pods.go:61] "coredns-66bc5c9577-vzrl4" [76f68bfa-2861-42a6-9d7e-b7cd25b012b6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 04:19:12.156968  494861 system_pods.go:61] "etcd-newest-cni-543467" [29381de0-7791-441e-a513-e35979ea0dd7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:19:12.156973  494861 system_pods.go:61] "kindnet-pzzgc" [298acecf-f8cf-46d2-bbfd-a73a057da8e8] Running
	I1124 04:19:12.156979  494861 system_pods.go:61] "kube-apiserver-newest-cni-543467" [07759926-5918-4158-84d1-c81b1a145e23] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:19:12.156987  494861 system_pods.go:61] "kube-controller-manager-newest-cni-543467" [252887d9-6a65-4755-b572-46d4cc1edca3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:19:12.156992  494861 system_pods.go:61] "kube-proxy-m2jcg" [10608e3c-2678-4bf9-9225-5b6421a2204c] Running
	I1124 04:19:12.156998  494861 system_pods.go:61] "kube-scheduler-newest-cni-543467" [2450721f-3e27-4653-8f61-54d10ec8cae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:19:12.157003  494861 system_pods.go:61] "storage-provisioner" [8602427f-09dd-41e4-92f2-b6aacf0608e8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 04:19:12.157009  494861 system_pods.go:74] duration metric: took 11.759363ms to wait for pod list to return data ...
	I1124 04:19:12.157017  494861 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:19:12.163846  494861 default_sa.go:45] found service account: "default"
	I1124 04:19:12.163878  494861 default_sa.go:55] duration metric: took 6.854377ms for default service account to be created ...
	I1124 04:19:12.163893  494861 kubeadm.go:587] duration metric: took 1.444120276s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 04:19:12.163912  494861 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:19:12.175414  494861 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:19:12.175444  494861 node_conditions.go:123] node cpu capacity is 2
	I1124 04:19:12.175458  494861 node_conditions.go:105] duration metric: took 11.5403ms to run NodePressure ...
	I1124 04:19:12.175471  494861 start.go:242] waiting for startup goroutines ...
	I1124 04:19:12.558827  494861 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-543467" context rescaled to 1 replicas
	I1124 04:19:12.558925  494861 start.go:247] waiting for cluster config update ...
	I1124 04:19:12.558953  494861 start.go:256] writing updated cluster config ...
	I1124 04:19:12.559291  494861 ssh_runner.go:195] Run: rm -f paused
	I1124 04:19:12.618652  494861 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 04:19:12.623896  494861 out.go:179] * Done! kubectl is now configured to use "newest-cni-543467" cluster and "default" namespace by default
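
The run above condenses minikube's readiness checks: scp the addon manifest onto the node, kubectl apply it, then poll https://192.168.76.2:8443/healthz until it answers 200 before inspecting kube-system pods. A minimal Go sketch of that healthz polling loop; the URL is taken from the log, and skipping TLS verification is an assumption for brevity (minikube's real api_server.go uses the cluster CA), so treat this as illustration rather than minikube's code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it answers 200 OK
// or the timeout expires, mirroring the "waiting for apiserver healthz status"
// step in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip cert verification instead of
			// loading the cluster CA as minikube itself does.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok", as at 04:19:12.143 above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
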
	
	
	==> CRI-O <==
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.281667453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.287699461Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5ea6c5c4-816b-4600-88db-8a903c58faca name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.292537345Z" level=info msg="Ran pod sandbox 61fe87b8c958f1a6ef50a175c5b1c36c8d0598b16ccb03d8163c1467f884f05b with infra container: kube-system/kube-proxy-m2jcg/POD" id=5ea6c5c4-816b-4600-88db-8a903c58faca name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.294691857Z" level=info msg="Running pod sandbox: kube-system/kindnet-pzzgc/POD" id=64840daa-3668-4514-ad0d-2b183b5e18f5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.294750779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.299237997Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c34298b1-cba7-40a3-b478-b62f4ce343eb name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.311829157Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7c4f0801-a2ca-4f14-8845-7f59483bb2e9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.319423153Z" level=info msg="Creating container: kube-system/kube-proxy-m2jcg/kube-proxy" id=73ddeb9c-2cd4-4536-a0fd-a20390b00f52 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.319531979Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.319941912Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=64840daa-3668-4514-ad0d-2b183b5e18f5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.325557102Z" level=info msg="Ran pod sandbox a17cc06b3858a8a71391fe55b5c20b36abab3d7fbb2946b8876b2d3c6d0fe06f with infra container: kube-system/kindnet-pzzgc/POD" id=64840daa-3668-4514-ad0d-2b183b5e18f5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.33135252Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a3cee21a-6fc1-4535-a70e-300688371c2a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.331988951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.332603998Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.336053985Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1a74b00a-78dd-47ff-b256-13c3a2453ded name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.342689536Z" level=info msg="Creating container: kube-system/kindnet-pzzgc/kindnet-cni" id=f146179c-f1f9-43a0-8c44-26b04937f9a8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.342783781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.347389876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.348013556Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.369064486Z" level=info msg="Created container 6fa0939149e97861a449feea7593c9a24da546423367822d04a48e0e4f09600a: kube-system/kube-proxy-m2jcg/kube-proxy" id=73ddeb9c-2cd4-4536-a0fd-a20390b00f52 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.370768835Z" level=info msg="Starting container: 6fa0939149e97861a449feea7593c9a24da546423367822d04a48e0e4f09600a" id=1992e74f-dd6a-41c6-b2c5-3f1f1b2a86e7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.382573015Z" level=info msg="Started container" PID=1484 containerID=6fa0939149e97861a449feea7593c9a24da546423367822d04a48e0e4f09600a description=kube-system/kube-proxy-m2jcg/kube-proxy id=1992e74f-dd6a-41c6-b2c5-3f1f1b2a86e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=61fe87b8c958f1a6ef50a175c5b1c36c8d0598b16ccb03d8163c1467f884f05b
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.384592388Z" level=info msg="Created container 7cf65bc12e4bccc47c963f092542a3af5cb3225718967f2f1da5590607029cb2: kube-system/kindnet-pzzgc/kindnet-cni" id=f146179c-f1f9-43a0-8c44-26b04937f9a8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.388514585Z" level=info msg="Starting container: 7cf65bc12e4bccc47c963f092542a3af5cb3225718967f2f1da5590607029cb2" id=78fdc599-b3fc-4a56-92d9-bd65e319443b name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:19:11 newest-cni-543467 crio[837]: time="2025-11-24T04:19:11.400204869Z" level=info msg="Started container" PID=1492 containerID=7cf65bc12e4bccc47c963f092542a3af5cb3225718967f2f1da5590607029cb2 description=kube-system/kindnet-pzzgc/kindnet-cni id=78fdc599-b3fc-4a56-92d9-bd65e319443b name=/runtime.v1.RuntimeService/StartContainer sandboxID=a17cc06b3858a8a71391fe55b5c20b36abab3d7fbb2946b8876b2d3c6d0fe06f
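
Each pod start in the CRI-O log above follows the same CRI call sequence: RunPodSandbox for the infra container, ImageStatus to check the image, then CreateContainer and StartContainer inside the sandbox. A hedged sketch of querying the same runtime.v1.RuntimeService over CRI-O's gRPC socket; the socket path is CRI-O's default and the cri-api client is the standard one, but this is an illustration, not how the test harness collects the dump:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default CRI socket; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same service the log lines name (/runtime.v1.RuntimeService/...).
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
	}
}
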
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7cf65bc12e4bc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   a17cc06b3858a       kindnet-pzzgc                               kube-system
	6fa0939149e97       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   61fe87b8c958f       kube-proxy-m2jcg                            kube-system
	8f35bccd243b4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            0                   ec3686600b931       kube-scheduler-newest-cni-543467            kube-system
	13a400f68de70       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      0                   92f7f2c08225c       etcd-newest-cni-543467                      kube-system
	f9f9af34de23b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   0                   55402cf179709       kube-controller-manager-newest-cni-543467   kube-system
	6b2145a7a089e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            0                   d43cca4dfe476       kube-apiserver-newest-cni-543467            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-543467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-543467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=newest-cni-543467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_19_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:19:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-543467
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:19:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:19:05 +0000   Mon, 24 Nov 2025 04:18:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:19:05 +0000   Mon, 24 Nov 2025 04:18:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:19:05 +0000   Mon, 24 Nov 2025 04:18:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 04:19:05 +0000   Mon, 24 Nov 2025 04:18:57 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-543467
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                9187578d-6ec8-41b4-a303-b1b23fbde790
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-543467                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-pzzgc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-543467             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-controller-manager-newest-cni-543467    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-m2jcg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-543467             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   Starting                 18s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node newest-cni-543467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node newest-cni-543467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18s (x8 over 18s)  kubelet          Node newest-cni-543467 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-543467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-543467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-543467 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-543467 event: Registered Node newest-cni-543467 in Controller
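
The describe output explains the Unschedulable coredns and storage-provisioner entries earlier in the log: the node still carries the node.kubernetes.io/not-ready taints because no CNI config had been written yet (kindnet only started at 04:19:11, per the CRI-O section above). A small client-go sketch that reads the same Ready condition and taints; the kubeconfig path is an assumption for illustration:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; any config pointing at the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "newest-cni-543467", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The Ready=False condition and not-ready taints shown in the dump above.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s: %s\n", c.Status, c.Reason, c.Message)
		}
	}
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint %s:%s\n", t.Key, t.Effect)
	}
}
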
	
	
	==> dmesg <==
	[Nov24 03:56] overlayfs: idmapped layers are currently not supported
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	[Nov24 04:15] overlayfs: idmapped layers are currently not supported
	[ +47.476343] overlayfs: idmapped layers are currently not supported
	[Nov24 04:16] overlayfs: idmapped layers are currently not supported
	[Nov24 04:17] overlayfs: idmapped layers are currently not supported
	[Nov24 04:18] overlayfs: idmapped layers are currently not supported
	[ +43.060353] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [13a400f68de703e7da9dc880f9390976d9d80597b96d664d6637e345b3dbe829] <==
	{"level":"warn","ts":"2025-11-24T04:19:01.182689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.203940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.221721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.241187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.265661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.277026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.294376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.311196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.328413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.347811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.371784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.382168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.400545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.418933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.438535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.455524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.472987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.490663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.507066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.523790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.549767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.578952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.593926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.615361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:01.731273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56742","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 04:19:14 up  3:01,  0 user,  load average: 3.41, 3.37, 2.89
	Linux newest-cni-543467 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7cf65bc12e4bccc47c963f092542a3af5cb3225718967f2f1da5590607029cb2] <==
	I1124 04:19:11.447872       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:19:11.514702       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 04:19:11.514851       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:19:11.514864       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:19:11.514877       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:19:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:19:11.721403       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:19:11.722637       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:19:11.725025       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:19:11.729151       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [6b2145a7a089e8486e58f948c9a7581dd320263de36419c6c4b1b7b1c41ed55b] <==
	I1124 04:19:02.705623       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 04:19:02.709970       1 controller.go:667] quota admission added evaluator for: namespaces
	E1124 04:19:02.727924       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1124 04:19:02.812436       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:19:02.812528       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 04:19:02.831991       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:19:02.832557       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 04:19:02.932929       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:19:03.372484       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 04:19:03.377728       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 04:19:03.377768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:19:04.122575       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:19:04.174037       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:19:04.281189       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 04:19:04.293359       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1124 04:19:04.294592       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 04:19:04.301983       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 04:19:04.525308       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:19:05.480847       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 04:19:05.518198       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 04:19:05.551555       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 04:19:10.231847       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:19:10.237014       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:19:10.279700       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 04:19:10.631346       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [f9f9af34de23b0bcbea691ff6d991ec8a431dbc1eb89a304cf945700204c7d4c] <==
	I1124 04:19:09.528466       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 04:19:09.529404       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 04:19:09.535776       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 04:19:09.543125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:19:09.544290       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 04:19:09.552925       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 04:19:09.555556       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:19:09.568978       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:19:09.569156       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 04:19:09.572270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 04:19:09.573559       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 04:19:09.573672       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 04:19:09.573778       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 04:19:09.573899       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-543467"
	I1124 04:19:09.573973       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 04:19:09.574056       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 04:19:09.575002       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 04:19:09.579342       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 04:19:09.579524       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 04:19:09.579593       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:19:09.580199       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:19:09.581429       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:19:09.581591       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 04:19:09.583658       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 04:19:09.584103       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [6fa0939149e97861a449feea7593c9a24da546423367822d04a48e0e4f09600a] <==
	I1124 04:19:11.502288       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:19:11.584243       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:19:11.686544       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:19:11.686580       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 04:19:11.686661       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:19:11.867157       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:19:11.867212       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:19:11.880811       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:19:11.881264       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:19:11.881278       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:19:11.883337       1 config.go:200] "Starting service config controller"
	I1124 04:19:11.883349       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:19:11.883375       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:19:11.883379       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:19:11.883396       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:19:11.883401       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:19:11.884012       1 config.go:309] "Starting node config controller"
	I1124 04:19:11.884019       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:19:11.884030       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:19:11.983678       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:19:11.983720       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:19:11.983764       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8f35bccd243b4a68c3aa65c48cf652b8ed075c9867bf6a35ff8ac780f11a0480] <==
	E1124 04:19:02.575615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 04:19:02.575670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 04:19:02.575722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 04:19:02.575768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 04:19:02.575816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 04:19:02.575862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 04:19:02.575909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 04:19:02.575960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 04:19:02.576008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 04:19:02.576057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 04:19:02.576145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 04:19:02.576200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 04:19:02.576250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 04:19:02.576306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 04:19:02.576360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 04:19:02.576483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 04:19:03.454490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 04:19:03.473514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 04:19:03.507295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 04:19:03.588190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 04:19:03.626336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 04:19:03.723058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 04:19:03.728555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 04:19:03.952326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1124 04:19:06.130440       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:19:05 newest-cni-543467 kubelet[1297]: I1124 04:19:05.808780    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba442f69d1e6b0d8709e059aec47fa9b-etc-ca-certificates\") pod \"kube-apiserver-newest-cni-543467\" (UID: \"ba442f69d1e6b0d8709e059aec47fa9b\") " pod="kube-system/kube-apiserver-newest-cni-543467"
	Nov 24 04:19:05 newest-cni-543467 kubelet[1297]: I1124 04:19:05.808797    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba442f69d1e6b0d8709e059aec47fa9b-k8s-certs\") pod \"kube-apiserver-newest-cni-543467\" (UID: \"ba442f69d1e6b0d8709e059aec47fa9b\") " pod="kube-system/kube-apiserver-newest-cni-543467"
	Nov 24 04:19:06 newest-cni-543467 kubelet[1297]: I1124 04:19:06.382090    1297 apiserver.go:52] "Watching apiserver"
	Nov 24 04:19:06 newest-cni-543467 kubelet[1297]: I1124 04:19:06.408116    1297 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 04:19:06 newest-cni-543467 kubelet[1297]: I1124 04:19:06.605333    1297 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-543467"
	Nov 24 04:19:06 newest-cni-543467 kubelet[1297]: I1124 04:19:06.606421    1297 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-543467"
	Nov 24 04:19:06 newest-cni-543467 kubelet[1297]: E1124 04:19:06.625625    1297 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-543467\" already exists" pod="kube-system/kube-apiserver-newest-cni-543467"
	Nov 24 04:19:06 newest-cni-543467 kubelet[1297]: E1124 04:19:06.628343    1297 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-543467\" already exists" pod="kube-system/kube-controller-manager-newest-cni-543467"
	Nov 24 04:19:06 newest-cni-543467 kubelet[1297]: I1124 04:19:06.653694    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-543467" podStartSLOduration=1.653673737 podStartE2EDuration="1.653673737s" podCreationTimestamp="2025-11-24 04:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:19:06.641203793 +0000 UTC m=+1.343749727" watchObservedRunningTime="2025-11-24 04:19:06.653673737 +0000 UTC m=+1.356219663"
	Nov 24 04:19:06 newest-cni-543467 kubelet[1297]: I1124 04:19:06.654126    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-543467" podStartSLOduration=2.654115203 podStartE2EDuration="2.654115203s" podCreationTimestamp="2025-11-24 04:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:19:06.653901022 +0000 UTC m=+1.356446956" watchObservedRunningTime="2025-11-24 04:19:06.654115203 +0000 UTC m=+1.356661146"
	Nov 24 04:19:06 newest-cni-543467 kubelet[1297]: I1124 04:19:06.691824    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-543467" podStartSLOduration=1.6917936930000002 podStartE2EDuration="1.691793693s" podCreationTimestamp="2025-11-24 04:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:19:06.67314051 +0000 UTC m=+1.375686444" watchObservedRunningTime="2025-11-24 04:19:06.691793693 +0000 UTC m=+1.394339627"
	Nov 24 04:19:06 newest-cni-543467 kubelet[1297]: I1124 04:19:06.713170    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-543467" podStartSLOduration=3.713139224 podStartE2EDuration="3.713139224s" podCreationTimestamp="2025-11-24 04:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:19:06.692765052 +0000 UTC m=+1.395310978" watchObservedRunningTime="2025-11-24 04:19:06.713139224 +0000 UTC m=+1.415685158"
	Nov 24 04:19:09 newest-cni-543467 kubelet[1297]: I1124 04:19:09.541462    1297 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 04:19:09 newest-cni-543467 kubelet[1297]: I1124 04:19:09.542163    1297 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 04:19:10 newest-cni-543467 kubelet[1297]: I1124 04:19:10.751423    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/10608e3c-2678-4bf9-9225-5b6421a2204c-kube-proxy\") pod \"kube-proxy-m2jcg\" (UID: \"10608e3c-2678-4bf9-9225-5b6421a2204c\") " pod="kube-system/kube-proxy-m2jcg"
	Nov 24 04:19:10 newest-cni-543467 kubelet[1297]: I1124 04:19:10.751472    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10608e3c-2678-4bf9-9225-5b6421a2204c-xtables-lock\") pod \"kube-proxy-m2jcg\" (UID: \"10608e3c-2678-4bf9-9225-5b6421a2204c\") " pod="kube-system/kube-proxy-m2jcg"
	Nov 24 04:19:10 newest-cni-543467 kubelet[1297]: I1124 04:19:10.751494    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/298acecf-f8cf-46d2-bbfd-a73a057da8e8-xtables-lock\") pod \"kindnet-pzzgc\" (UID: \"298acecf-f8cf-46d2-bbfd-a73a057da8e8\") " pod="kube-system/kindnet-pzzgc"
	Nov 24 04:19:10 newest-cni-543467 kubelet[1297]: I1124 04:19:10.751668    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r74pn\" (UniqueName: \"kubernetes.io/projected/298acecf-f8cf-46d2-bbfd-a73a057da8e8-kube-api-access-r74pn\") pod \"kindnet-pzzgc\" (UID: \"298acecf-f8cf-46d2-bbfd-a73a057da8e8\") " pod="kube-system/kindnet-pzzgc"
	Nov 24 04:19:10 newest-cni-543467 kubelet[1297]: I1124 04:19:10.751708    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znrjp\" (UniqueName: \"kubernetes.io/projected/10608e3c-2678-4bf9-9225-5b6421a2204c-kube-api-access-znrjp\") pod \"kube-proxy-m2jcg\" (UID: \"10608e3c-2678-4bf9-9225-5b6421a2204c\") " pod="kube-system/kube-proxy-m2jcg"
	Nov 24 04:19:10 newest-cni-543467 kubelet[1297]: I1124 04:19:10.751727    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/298acecf-f8cf-46d2-bbfd-a73a057da8e8-cni-cfg\") pod \"kindnet-pzzgc\" (UID: \"298acecf-f8cf-46d2-bbfd-a73a057da8e8\") " pod="kube-system/kindnet-pzzgc"
	Nov 24 04:19:10 newest-cni-543467 kubelet[1297]: I1124 04:19:10.751748    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10608e3c-2678-4bf9-9225-5b6421a2204c-lib-modules\") pod \"kube-proxy-m2jcg\" (UID: \"10608e3c-2678-4bf9-9225-5b6421a2204c\") " pod="kube-system/kube-proxy-m2jcg"
	Nov 24 04:19:10 newest-cni-543467 kubelet[1297]: I1124 04:19:10.751820    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/298acecf-f8cf-46d2-bbfd-a73a057da8e8-lib-modules\") pod \"kindnet-pzzgc\" (UID: \"298acecf-f8cf-46d2-bbfd-a73a057da8e8\") " pod="kube-system/kindnet-pzzgc"
	Nov 24 04:19:10 newest-cni-543467 kubelet[1297]: I1124 04:19:10.983176    1297 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 04:19:11 newest-cni-543467 kubelet[1297]: I1124 04:19:11.681281    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m2jcg" podStartSLOduration=1.681262372 podStartE2EDuration="1.681262372s" podCreationTimestamp="2025-11-24 04:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:19:11.642916774 +0000 UTC m=+6.345462700" watchObservedRunningTime="2025-11-24 04:19:11.681262372 +0000 UTC m=+6.383808298"
	Nov 24 04:19:11 newest-cni-543467 kubelet[1297]: I1124 04:19:11.681390    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-pzzgc" podStartSLOduration=1.681385828 podStartE2EDuration="1.681385828s" podCreationTimestamp="2025-11-24 04:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:19:11.68081627 +0000 UTC m=+6.383362212" watchObservedRunningTime="2025-11-24 04:19:11.681385828 +0000 UTC m=+6.383931762"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-543467 -n newest-cni-543467
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-543467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-crwzn storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-543467 describe pod coredns-66bc5c9577-crwzn storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-543467 describe pod coredns-66bc5c9577-crwzn storage-provisioner: exit status 1 (85.684552ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-crwzn" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-543467 describe pod coredns-66bc5c9577-crwzn storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.55s)
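
The "NotFound" errors above indicate that the two non-running pods were deleted between the listing call and the describe call: the profile was restarting at the time, so the post-mortem raced the cluster's own cleanup. A race-free sketch, assuming the same kubectl context as above, records namespace, name, and UID in the single listing call:

	# List non-running pods once, capturing namespace/name and UID together
	kubectl --context newest-cni-543467 get po -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{" uid="}{.metadata.uid}{"\n"}{end}'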

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-303179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-303179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (284.540302ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:19:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
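
Note on the failure mode: per the error text, the paused check shells into the node and runs `sudo runc list -f json`, and the default runc state directory /run/runc does not exist there. A minimal way to re-run the exact failing command by hand (a sketch, assuming the docker driver and the container name shown in the inspect output below):

	# Re-run the command from the MK_ADDON_ENABLE_PAUSED error inside the node
	docker exec default-k8s-diff-port-303179 sudo runc list -f json
	# On this node it should reproduce the error above:
	#   level=error msg="open /run/runc: no such file or directory"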
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-303179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-303179 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-303179 describe deploy/metrics-server -n kube-system: exit status 1 (96.351256ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-303179 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
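
Since the enable command exited with MK_ADDON_ENABLE_PAUSED before applying anything, the metrics-server deployment was never created, so the image assertion had nothing to inspect. When the deployment does exist, the substituted image can be checked directly; a sketch, assuming the same context (the expected value comes from the --images/--registries flags above):

	# Print the container image of the metrics-server deployment
	kubectl --context default-k8s-diff-port-303179 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# The test expects the output to contain fake.domain/registry.k8s.io/echoserver:1.4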
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-303179
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-303179:

-- stdout --
	[
	    {
	        "Id": "c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748",
	        "Created": "2025-11-24T04:17:56.199463475Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 491339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:17:56.264034639Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/hostname",
	        "HostsPath": "/var/lib/docker/containers/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/hosts",
	        "LogPath": "/var/lib/docker/containers/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748-json.log",
	        "Name": "/default-k8s-diff-port-303179",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-303179:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-303179",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748",
	                "LowerDir": "/var/lib/docker/overlay2/f795050361c122f8186f9d116815a241873f66c7dfed963bb16fb3ec6718f306-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f795050361c122f8186f9d116815a241873f66c7dfed963bb16fb3ec6718f306/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f795050361c122f8186f9d116815a241873f66c7dfed963bb16fb3ec6718f306/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f795050361c122f8186f9d116815a241873f66c7dfed963bb16fb3ec6718f306/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-303179",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-303179/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-303179",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-303179",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-303179",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb589447ded9744b01198c713228cc8c33410d191f6da6ff8665e3c0f31eb4a0",
	            "SandboxKey": "/var/run/docker/netns/bb589447ded9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-303179": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:6e:c1:12:aa:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7dd701f3791fa7f6d8831a64698f944225df32ea42e663c9bfc78d30eb09b5d6",
	                    "EndpointID": "2cc6359ab672fc283563814680ab771428cea343c085b5b3603008186e4f8d7e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-303179",
	                        "c6af048d3f8e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
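
For this failure, the useful parts of the inspect dump above are the container state and the published port map; both can be pulled without the full JSON via Go-template format strings (a sketch, assuming the same container name):

	# Extract just the state and host port bindings from the inspect data
	docker inspect -f '{{.State.Status}}' default-k8s-diff-port-303179
	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-303179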
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-303179 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-303179 logs -n 25: (1.442089917s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:15 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-600301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │                     │
	│ stop    │ -p no-preload-600301 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ addons  │ enable dashboard -p no-preload-600301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-520529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ stop    │ -p embed-certs-520529 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-520529 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:18 UTC │
	│ image   │ no-preload-600301 image list --format=json                                                                                                                                                                                                    │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ pause   │ -p no-preload-600301 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p disable-driver-mounts-995056                                                                                                                                                                                                               │ disable-driver-mounts-995056 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:19 UTC │
	│ image   │ embed-certs-520529 image list --format=json                                                                                                                                                                                                   │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ pause   │ -p embed-certs-520529 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │                     │
	│ delete  │ -p embed-certs-520529                                                                                                                                                                                                                         │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ delete  │ -p embed-certs-520529                                                                                                                                                                                                                         │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ start   │ -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-543467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ stop    │ -p newest-cni-543467 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-543467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ start   │ -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-303179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:19:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:19:16.789796  498056 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:19:16.790139  498056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:19:16.790154  498056 out.go:374] Setting ErrFile to fd 2...
	I1124 04:19:16.790160  498056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:19:16.790476  498056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:19:16.790851  498056 out.go:368] Setting JSON to false
	I1124 04:19:16.791783  498056 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10886,"bootTime":1763947071,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:19:16.791852  498056 start.go:143] virtualization:  
	I1124 04:19:16.795221  498056 out.go:179] * [newest-cni-543467] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:19:16.799296  498056 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:19:16.799443  498056 notify.go:221] Checking for updates...
	I1124 04:19:16.806014  498056 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:19:16.808909  498056 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:19:16.811586  498056 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:19:16.814476  498056 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:19:16.817320  498056 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:19:16.820610  498056 config.go:182] Loaded profile config "newest-cni-543467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:19:16.821307  498056 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:19:16.853580  498056 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:19:16.853689  498056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:19:16.922831  498056 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:19:16.906500928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:19:16.922933  498056 docker.go:319] overlay module found
	I1124 04:19:16.926067  498056 out.go:179] * Using the docker driver based on existing profile
	I1124 04:19:16.928941  498056 start.go:309] selected driver: docker
	I1124 04:19:16.928959  498056 start.go:927] validating driver "docker" against &{Name:newest-cni-543467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-543467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:19:16.929082  498056 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:19:16.929825  498056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:19:16.997005  498056 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:19:16.987250413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:19:16.997340  498056 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 04:19:16.997373  498056 cni.go:84] Creating CNI manager for ""
	I1124 04:19:16.997433  498056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:19:16.997474  498056 start.go:353] cluster config:
	{Name:newest-cni-543467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-543467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:19:17.001072  498056 out.go:179] * Starting "newest-cni-543467" primary control-plane node in "newest-cni-543467" cluster
	I1124 04:19:17.004490  498056 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:19:17.007461  498056 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:19:17.010289  498056 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:19:17.010358  498056 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 04:19:17.010373  498056 cache.go:65] Caching tarball of preloaded images
	I1124 04:19:17.010377  498056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:19:17.010566  498056 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:19:17.010579  498056 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 04:19:17.010689  498056 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/config.json ...
	I1124 04:19:17.031493  498056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:19:17.031515  498056 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:19:17.031530  498056 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:19:17.031561  498056 start.go:360] acquireMachinesLock for newest-cni-543467: {Name:mk49235894ca4bdab744b09877359a6e0584cafb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:19:17.031632  498056 start.go:364] duration metric: took 38.482µs to acquireMachinesLock for "newest-cni-543467"
	I1124 04:19:17.031656  498056 start.go:96] Skipping create...Using existing machine configuration
	I1124 04:19:17.031662  498056 fix.go:54] fixHost starting: 
	I1124 04:19:17.031927  498056 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:19:17.050729  498056 fix.go:112] recreateIfNeeded on newest-cni-543467: state=Stopped err=<nil>
	W1124 04:19:17.050761  498056 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 24 04:19:09 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:09.098746618Z" level=info msg="Created container be4baa61f7b2d16dbf38f5707ea1a1e021969b07c0378b435b03007a9bc5e819: kube-system/coredns-66bc5c9577-jtn7v/coredns" id=dba6107b-8c25-4ea1-a270-c3bfa988d396 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:09 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:09.103875345Z" level=info msg="Starting container: be4baa61f7b2d16dbf38f5707ea1a1e021969b07c0378b435b03007a9bc5e819" id=e6d34f9e-73aa-4233-b9bb-f8fe2864eaac name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:19:09 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:09.123708694Z" level=info msg="Started container" PID=1735 containerID=be4baa61f7b2d16dbf38f5707ea1a1e021969b07c0378b435b03007a9bc5e819 description=kube-system/coredns-66bc5c9577-jtn7v/coredns id=e6d34f9e-73aa-4233-b9bb-f8fe2864eaac name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c0cbf8d3e0bbc12042285c66daeb0d0ed7c9dea5fc0f032b992c7295be18f34
	Nov 24 04:19:12 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:12.426534556Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4bee9627-c146-40cb-ba22-490f7218fa16 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:12 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:12.426650283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:12 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:12.432566474Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8e8bce9c611a0922d4c7092c5ded31c1d3908f85ae51a1049b200f358966b5b1 UID:aa301f34-6d9a-43c3-879c-d900c3ba9020 NetNS:/var/run/netns/6c088ce6-993e-4008-8c79-71553c9275eb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d148}] Aliases:map[]}"
	Nov 24 04:19:12 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:12.432606556Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 04:19:12 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:12.44817774Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8e8bce9c611a0922d4c7092c5ded31c1d3908f85ae51a1049b200f358966b5b1 UID:aa301f34-6d9a-43c3-879c-d900c3ba9020 NetNS:/var/run/netns/6c088ce6-993e-4008-8c79-71553c9275eb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d148}] Aliases:map[]}"
	Nov 24 04:19:12 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:12.448342559Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 04:19:12 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:12.454803322Z" level=info msg="Ran pod sandbox 8e8bce9c611a0922d4c7092c5ded31c1d3908f85ae51a1049b200f358966b5b1 with infra container: default/busybox/POD" id=4bee9627-c146-40cb-ba22-490f7218fa16 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:12 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:12.456711661Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0b611d8b-261b-4ffe-b3a0-239ded7e4abc name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:12 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:12.456954938Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0b611d8b-261b-4ffe-b3a0-239ded7e4abc name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:12 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:12.457082505Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0b611d8b-261b-4ffe-b3a0-239ded7e4abc name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:12 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:12.458507458Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e089c450-db85-44c2-9172-ec035c7cbce8 name=/runtime.v1.ImageService/PullImage
	Nov 24 04:19:12 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:12.465413305Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 04:19:14 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:14.681147684Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=e089c450-db85-44c2-9172-ec035c7cbce8 name=/runtime.v1.ImageService/PullImage
	Nov 24 04:19:14 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:14.682415966Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e61c839c-85a2-4c2e-b0a8-8801ab88587e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:14 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:14.687118538Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ae1f16d2-4a1f-43e4-b6c1-cb203d186492 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:14 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:14.695003901Z" level=info msg="Creating container: default/busybox/busybox" id=536966e8-9d18-440b-9954-0cbf719f9f63 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:14 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:14.695267978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:14 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:14.704773915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:14 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:14.70542381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:14 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:14.729079195Z" level=info msg="Created container 8737fe564ba1e95e4c7e613696a4fc5f040443405a9e7a5852acc1473cfd6d9e: default/busybox/busybox" id=536966e8-9d18-440b-9954-0cbf719f9f63 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:14 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:14.730215668Z" level=info msg="Starting container: 8737fe564ba1e95e4c7e613696a4fc5f040443405a9e7a5852acc1473cfd6d9e" id=acfe0955-7f21-49d1-9ced-34e30a1ca7f0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:19:14 default-k8s-diff-port-303179 crio[837]: time="2025-11-24T04:19:14.732396298Z" level=info msg="Started container" PID=1793 containerID=8737fe564ba1e95e4c7e613696a4fc5f040443405a9e7a5852acc1473cfd6d9e description=default/busybox/busybox id=acfe0955-7f21-49d1-9ced-34e30a1ca7f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e8bce9c611a0922d4c7092c5ded31c1d3908f85ae51a1049b200f358966b5b1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	8737fe564ba1e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   8e8bce9c611a0       busybox                                                default
	be4baa61f7b2d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   6c0cbf8d3e0bb       coredns-66bc5c9577-jtn7v                               kube-system
	a1a398ba10a6d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   e7c48ca7bac4b       storage-provisioner                                    kube-system
	a4e6213f7bbed       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   4fd45835d3b8d       kube-proxy-dxbvb                                       kube-system
	8528f2dd9ce9f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   273596313c9e3       kindnet-wpp6p                                          kube-system
	14f3807825ebf       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   5fde41b1f0b46       kube-controller-manager-default-k8s-diff-port-303179   kube-system
	af636d3350051       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   51726fa26826c       etcd-default-k8s-diff-port-303179                      kube-system
	2dd6cb11e9890       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   5294ff9df2d70       kube-apiserver-default-k8s-diff-port-303179            kube-system
	8b4389062afcb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   24346bc53f333       kube-scheduler-default-k8s-diff-port-303179            kube-system
	
	
	==> coredns [be4baa61f7b2d16dbf38f5707ea1a1e021969b07c0378b435b03007a9bc5e819] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34534 - 26244 "HINFO IN 3710754763353173227.2030898237646968350. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024624267s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-303179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-303179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=default-k8s-diff-port-303179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_18_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:18:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-303179
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:19:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:19:08 +0000   Mon, 24 Nov 2025 04:18:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:19:08 +0000   Mon, 24 Nov 2025 04:18:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:19:08 +0000   Mon, 24 Nov 2025 04:18:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:19:08 +0000   Mon, 24 Nov 2025 04:19:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-303179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                0604e81b-b009-43d1-b54f-04b6a69cede9
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-jtn7v                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-303179                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-wpp6p                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-303179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-303179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-dxbvb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-303179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 53s   kube-proxy       
	  Normal   Starting                 61s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s   kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s   kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s   kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s   node-controller  Node default-k8s-diff-port-303179 event: Registered Node default-k8s-diff-port-303179 in Controller
	  Normal   NodeReady                14s   kubelet          Node default-k8s-diff-port-303179 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 03:56] overlayfs: idmapped layers are currently not supported
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	[Nov24 04:15] overlayfs: idmapped layers are currently not supported
	[ +47.476343] overlayfs: idmapped layers are currently not supported
	[Nov24 04:16] overlayfs: idmapped layers are currently not supported
	[Nov24 04:17] overlayfs: idmapped layers are currently not supported
	[Nov24 04:18] overlayfs: idmapped layers are currently not supported
	[ +43.060353] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [af636d335005122d4f0c933f883bd9a28a472ee2352cbddc80b6b54c6811af52] <==
	{"level":"warn","ts":"2025-11-24T04:18:16.994325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.023844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.042501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.057068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.088435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.102689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.120193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.143066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.158353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.175213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.198777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.215202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.234678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.250960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.265213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.296103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.311495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.333269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.345819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.363064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.387819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.451568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.473403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.493467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:18:17.662572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58292","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 04:19:22 up  3:01,  0 user,  load average: 3.12, 3.31, 2.88
	Linux default-k8s-diff-port-303179 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8528f2dd9ce9fb782e6a395c7b8e4085e74a15c180119a6aec0aa75149cd27cc] <==
	I1124 04:18:28.119695       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:18:28.119954       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 04:18:28.120078       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:18:28.120090       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:18:28.120099       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:18:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:18:28.331680       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:18:28.331948       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:18:28.332009       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:18:28.332196       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 04:18:58.321558       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 04:18:58.321675       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 04:18:58.321762       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 04:18:58.321845       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 04:18:59.932180       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:18:59.932289       1 metrics.go:72] Registering metrics
	I1124 04:18:59.932384       1 controller.go:711] "Syncing nftables rules"
	I1124 04:19:08.326622       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:19:08.326672       1 main.go:301] handling current node
	I1124 04:19:18.320605       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:19:18.320740       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2dd6cb11e9890f5cd4a9438bcc39dd0ebed3e306850ef70607af78ef85eee79d] <==
	I1124 04:18:18.973583       1 policy_source.go:240] refreshing policies
	I1124 04:18:18.973930       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 04:18:18.993711       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:18:19.096154       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 04:18:19.099665       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:18:19.119768       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:18:19.123145       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 04:18:19.658504       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 04:18:19.664349       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 04:18:19.664438       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:18:20.426043       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:18:20.538898       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:18:20.687612       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 04:18:20.695432       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 04:18:20.696734       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 04:18:20.701737       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 04:18:20.800975       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:18:21.744672       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 04:18:21.759136       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 04:18:21.782983       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 04:18:26.561585       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 04:18:26.909647       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 04:18:27.043385       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:18:27.112511       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 04:19:20.244142       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:53630: use of closed network connection
	
	
	==> kube-controller-manager [14f3807825ebf935eb1971b80c4199e3de09fdf24ba78893e569be13b78018c5] <==
	I1124 04:18:25.877955       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 04:18:25.881856       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:18:25.887697       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 04:18:25.893030       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 04:18:25.902217       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 04:18:25.902342       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 04:18:25.902387       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 04:18:25.902440       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 04:18:25.902509       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 04:18:25.902903       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 04:18:25.903906       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 04:18:25.904087       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 04:18:25.904614       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 04:18:25.911492       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 04:18:25.912541       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 04:18:25.912599       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 04:18:25.911567       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 04:18:25.917663       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:18:25.919754       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 04:18:25.921071       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-303179" podCIDRs=["10.244.0.0/24"]
	I1124 04:18:25.930553       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 04:18:25.947667       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:18:25.949232       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:18:25.949298       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:19:10.868097       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a4e6213f7bbede2ec1e02cf47f804df61713d29e88d1cee3407c3262572cee31] <==
	I1124 04:18:28.233215       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:18:28.463723       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:18:28.579011       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:18:28.579605       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 04:18:28.579760       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:18:28.621811       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:18:28.621941       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:18:28.627533       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:18:28.627925       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:18:28.628266       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:18:28.630684       1 config.go:200] "Starting service config controller"
	I1124 04:18:28.630743       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:18:28.630787       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:18:28.630813       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:18:28.630855       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:18:28.630881       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:18:28.631543       1 config.go:309] "Starting node config controller"
	I1124 04:18:28.631597       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:18:28.631625       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:18:28.731572       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:18:28.731621       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:18:28.731645       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8b4389062afcbcca350063b822a1b381d3b4e3a9dcf8f7032caf33f24ee3442a] <==
	E1124 04:18:19.083202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 04:18:19.083277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 04:18:19.083354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 04:18:19.084416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 04:18:19.084470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 04:18:19.084557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 04:18:19.085103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 04:18:19.085177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 04:18:19.085214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 04:18:19.085386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 04:18:19.085539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 04:18:19.085602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 04:18:19.085699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 04:18:19.912377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 04:18:19.976006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 04:18:19.981556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 04:18:20.019584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 04:18:20.045527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 04:18:20.064595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 04:18:20.075596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 04:18:20.104771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 04:18:20.104869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 04:18:20.166685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 04:18:20.182959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1124 04:18:22.844189       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:18:23 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:23.065450    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-303179" podStartSLOduration=1.065301608 podStartE2EDuration="1.065301608s" podCreationTimestamp="2025-11-24 04:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:18:23.008874345 +0000 UTC m=+1.441196677" watchObservedRunningTime="2025-11-24 04:18:23.065301608 +0000 UTC m=+1.497623940"
	Nov 24 04:18:25 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:25.933167    1304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 04:18:25 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:25.936304    1304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 04:18:26 default-k8s-diff-port-303179 kubelet[1304]: E1124 04:18:26.620529    1304 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:default-k8s-diff-port-303179\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-303179' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 24 04:18:26 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:26.670906    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3-lib-modules\") pod \"kindnet-wpp6p\" (UID: \"0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3\") " pod="kube-system/kindnet-wpp6p"
	Nov 24 04:18:26 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:26.670954    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsv8k\" (UniqueName: \"kubernetes.io/projected/0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3-kube-api-access-jsv8k\") pod \"kindnet-wpp6p\" (UID: \"0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3\") " pod="kube-system/kindnet-wpp6p"
	Nov 24 04:18:26 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:26.670984    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3-xtables-lock\") pod \"kindnet-wpp6p\" (UID: \"0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3\") " pod="kube-system/kindnet-wpp6p"
	Nov 24 04:18:26 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:26.671002    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3-cni-cfg\") pod \"kindnet-wpp6p\" (UID: \"0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3\") " pod="kube-system/kindnet-wpp6p"
	Nov 24 04:18:26 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:26.773152    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/24177ca5-eb2f-4ac2-a32c-d384781bad58-kube-proxy\") pod \"kube-proxy-dxbvb\" (UID: \"24177ca5-eb2f-4ac2-a32c-d384781bad58\") " pod="kube-system/kube-proxy-dxbvb"
	Nov 24 04:18:26 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:26.773215    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g52td\" (UniqueName: \"kubernetes.io/projected/24177ca5-eb2f-4ac2-a32c-d384781bad58-kube-api-access-g52td\") pod \"kube-proxy-dxbvb\" (UID: \"24177ca5-eb2f-4ac2-a32c-d384781bad58\") " pod="kube-system/kube-proxy-dxbvb"
	Nov 24 04:18:26 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:26.773259    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24177ca5-eb2f-4ac2-a32c-d384781bad58-xtables-lock\") pod \"kube-proxy-dxbvb\" (UID: \"24177ca5-eb2f-4ac2-a32c-d384781bad58\") " pod="kube-system/kube-proxy-dxbvb"
	Nov 24 04:18:26 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:26.773303    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24177ca5-eb2f-4ac2-a32c-d384781bad58-lib-modules\") pod \"kube-proxy-dxbvb\" (UID: \"24177ca5-eb2f-4ac2-a32c-d384781bad58\") " pod="kube-system/kube-proxy-dxbvb"
	Nov 24 04:18:27 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:27.602750    1304 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 04:18:27 default-k8s-diff-port-303179 kubelet[1304]: W1124 04:18:27.832188    1304 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/crio-273596313c9e33ce5aadb4a81d6e5a2df12640c606ae3a56a487bd36f900c9ad WatchSource:0}: Error finding container 273596313c9e33ce5aadb4a81d6e5a2df12640c606ae3a56a487bd36f900c9ad: Status 404 returned error can't find the container with id 273596313c9e33ce5aadb4a81d6e5a2df12640c606ae3a56a487bd36f900c9ad
	Nov 24 04:18:28 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:28.993926    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wpp6p" podStartSLOduration=2.993906831 podStartE2EDuration="2.993906831s" podCreationTimestamp="2025-11-24 04:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:18:28.969275045 +0000 UTC m=+7.401597377" watchObservedRunningTime="2025-11-24 04:18:28.993906831 +0000 UTC m=+7.426229155"
	Nov 24 04:18:29 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:18:29.020179    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dxbvb" podStartSLOduration=3.020158034 podStartE2EDuration="3.020158034s" podCreationTimestamp="2025-11-24 04:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:18:28.996554618 +0000 UTC m=+7.428876966" watchObservedRunningTime="2025-11-24 04:18:29.020158034 +0000 UTC m=+7.452480366"
	Nov 24 04:19:08 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:19:08.598087    1304 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 04:19:08 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:19:08.788047    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4skl\" (UniqueName: \"kubernetes.io/projected/4d7d1174-e169-4297-a8a2-55a47f03d9d6-kube-api-access-g4skl\") pod \"storage-provisioner\" (UID: \"4d7d1174-e169-4297-a8a2-55a47f03d9d6\") " pod="kube-system/storage-provisioner"
	Nov 24 04:19:08 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:19:08.788110    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzh2x\" (UniqueName: \"kubernetes.io/projected/cd5d148d-8e9e-4bac-a54c-d71637a8cb0c-kube-api-access-jzh2x\") pod \"coredns-66bc5c9577-jtn7v\" (UID: \"cd5d148d-8e9e-4bac-a54c-d71637a8cb0c\") " pod="kube-system/coredns-66bc5c9577-jtn7v"
	Nov 24 04:19:08 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:19:08.788138    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4d7d1174-e169-4297-a8a2-55a47f03d9d6-tmp\") pod \"storage-provisioner\" (UID: \"4d7d1174-e169-4297-a8a2-55a47f03d9d6\") " pod="kube-system/storage-provisioner"
	Nov 24 04:19:08 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:19:08.788158    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd5d148d-8e9e-4bac-a54c-d71637a8cb0c-config-volume\") pod \"coredns-66bc5c9577-jtn7v\" (UID: \"cd5d148d-8e9e-4bac-a54c-d71637a8cb0c\") " pod="kube-system/coredns-66bc5c9577-jtn7v"
	Nov 24 04:19:10 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:19:10.052106    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.052086463 podStartE2EDuration="42.052086463s" podCreationTimestamp="2025-11-24 04:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:19:10.035145752 +0000 UTC m=+48.467468092" watchObservedRunningTime="2025-11-24 04:19:10.052086463 +0000 UTC m=+48.484408786"
	Nov 24 04:19:12 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:19:12.115326    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jtn7v" podStartSLOduration=45.11530566 podStartE2EDuration="45.11530566s" podCreationTimestamp="2025-11-24 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 04:19:10.053280955 +0000 UTC m=+48.485603295" watchObservedRunningTime="2025-11-24 04:19:12.11530566 +0000 UTC m=+50.547627992"
	Nov 24 04:19:12 default-k8s-diff-port-303179 kubelet[1304]: I1124 04:19:12.221570    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4cgc\" (UniqueName: \"kubernetes.io/projected/aa301f34-6d9a-43c3-879c-d900c3ba9020-kube-api-access-z4cgc\") pod \"busybox\" (UID: \"aa301f34-6d9a-43c3-879c-d900c3ba9020\") " pod="default/busybox"
	Nov 24 04:19:12 default-k8s-diff-port-303179 kubelet[1304]: W1124 04:19:12.450710    1304 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/crio-8e8bce9c611a0922d4c7092c5ded31c1d3908f85ae51a1049b200f358966b5b1 WatchSource:0}: Error finding container 8e8bce9c611a0922d4c7092c5ded31c1d3908f85ae51a1049b200f358966b5b1: Status 404 returned error can't find the container with id 8e8bce9c611a0922d4c7092c5ded31c1d3908f85ae51a1049b200f358966b5b1
	
	
	==> storage-provisioner [a1a398ba10a6dcdec6c695210603e84a700d48a624ca8031abe14b2a1f10e7a9] <==
	I1124 04:19:09.070538       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 04:19:09.099062       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 04:19:09.099603       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 04:19:09.106804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:19:09.116828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:19:09.117152       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 04:19:09.120443       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-303179_2d55a054-7c4d-48f2-a74c-f7c3e9c4adc8!
	I1124 04:19:09.136326       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"09f66fd8-db14-4a17-8771-4d111bed13aa", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-303179_2d55a054-7c4d-48f2-a74c-f7c3e9c4adc8 became leader
	W1124 04:19:09.140626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:19:09.146183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:19:09.222583       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-303179_2d55a054-7c4d-48f2-a74c-f7c3e9c4adc8!
	W1124 04:19:11.149691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:19:11.157138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:19:13.160327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:19:13.164742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:19:15.168454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:19:15.176424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:19:17.179856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:19:17.184613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:19:19.187888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:19:19.193373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:19:21.196630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:19:21.201262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-303179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.89s)
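
To re-run just this failure against the same stack, the integration suite can be filtered by test name from a minikube source checkout. A minimal sketch, assuming the tree's make integration target and its TEST_ARGS hook (both taken from minikube's contributor documentation rather than from this report, so spellings may differ between versions); the job here also passes --container-runtime=crio as a start arg, which would need to be folded into -minikube-start-args with appropriate quoting:

	env TEST_ARGS="-minikube-start-args=--driver=docker -test.run TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive" make integration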

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-543467 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-543467 --alsologtostderr -v=1: exit status 80 (1.873125215s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-543467 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 04:19:32.822304  500473 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:19:32.822486  500473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:19:32.822499  500473 out.go:374] Setting ErrFile to fd 2...
	I1124 04:19:32.822505  500473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:19:32.822814  500473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:19:32.823098  500473 out.go:368] Setting JSON to false
	I1124 04:19:32.823138  500473 mustload.go:66] Loading cluster: newest-cni-543467
	I1124 04:19:32.823577  500473 config.go:182] Loaded profile config "newest-cni-543467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:19:32.824107  500473 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:19:32.843448  500473 host.go:66] Checking if "newest-cni-543467" exists ...
	I1124 04:19:32.843873  500473 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:19:32.912375  500473 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-24 04:19:32.902889164 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:19:32.913140  500473 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763935228-21975/minikube-v1.37.0-1763935228-21975-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763935228-21975-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-543467 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 04:19:32.916755  500473 out.go:179] * Pausing node newest-cni-543467 ... 
	I1124 04:19:32.921014  500473 host.go:66] Checking if "newest-cni-543467" exists ...
	I1124 04:19:32.921368  500473 ssh_runner.go:195] Run: systemctl --version
	I1124 04:19:32.921422  500473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:32.941426  500473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:19:33.045895  500473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:19:33.061489  500473 pause.go:52] kubelet running: true
	I1124 04:19:33.061572  500473 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:19:33.292107  500473 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:19:33.292183  500473 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:19:33.357912  500473 cri.go:89] found id: "fd99f976eca993b9b60a234ee0fa68afe7273c5df82607dfca654e378d0f1921"
	I1124 04:19:33.357942  500473 cri.go:89] found id: "201b2c25c22abf0377b85a36b8eb31fe80999b02511bf72b46ea931c57632c71"
	I1124 04:19:33.357948  500473 cri.go:89] found id: "5527fb66d2fe4207c38f35db01c1200259d25ccacd80469383a78b9131d8068a"
	I1124 04:19:33.357951  500473 cri.go:89] found id: "c13ec21534f6da44a392b7f2813576f9129b5e5a987bd880e4d67adb534318e3"
	I1124 04:19:33.357955  500473 cri.go:89] found id: "1561bf2881e2d625df271f5d46170b77c2fd6581ce6cf6a9bec4d003e044ec02"
	I1124 04:19:33.357958  500473 cri.go:89] found id: "037326d6daae293670fae3227d4c4b9bd31a79a31a1fbf7812a603c516e804eb"
	I1124 04:19:33.357961  500473 cri.go:89] found id: ""
	I1124 04:19:33.358013  500473 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:19:33.368811  500473 retry.go:31] will retry after 283.849389ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:19:33Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:19:33.653325  500473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:19:33.666888  500473 pause.go:52] kubelet running: false
	I1124 04:19:33.666956  500473 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:19:33.895965  500473 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:19:33.896048  500473 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:19:34.008325  500473 cri.go:89] found id: "fd99f976eca993b9b60a234ee0fa68afe7273c5df82607dfca654e378d0f1921"
	I1124 04:19:34.008353  500473 cri.go:89] found id: "201b2c25c22abf0377b85a36b8eb31fe80999b02511bf72b46ea931c57632c71"
	I1124 04:19:34.008360  500473 cri.go:89] found id: "5527fb66d2fe4207c38f35db01c1200259d25ccacd80469383a78b9131d8068a"
	I1124 04:19:34.008364  500473 cri.go:89] found id: "c13ec21534f6da44a392b7f2813576f9129b5e5a987bd880e4d67adb534318e3"
	I1124 04:19:34.008367  500473 cri.go:89] found id: "1561bf2881e2d625df271f5d46170b77c2fd6581ce6cf6a9bec4d003e044ec02"
	I1124 04:19:34.008371  500473 cri.go:89] found id: "037326d6daae293670fae3227d4c4b9bd31a79a31a1fbf7812a603c516e804eb"
	I1124 04:19:34.008374  500473 cri.go:89] found id: ""
	I1124 04:19:34.008438  500473 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:19:34.022864  500473 retry.go:31] will retry after 297.151469ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:19:34Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:19:34.320368  500473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:19:34.334629  500473 pause.go:52] kubelet running: false
	I1124 04:19:34.334755  500473 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:19:34.540059  500473 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:19:34.540193  500473 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:19:34.612493  500473 cri.go:89] found id: "fd99f976eca993b9b60a234ee0fa68afe7273c5df82607dfca654e378d0f1921"
	I1124 04:19:34.612527  500473 cri.go:89] found id: "201b2c25c22abf0377b85a36b8eb31fe80999b02511bf72b46ea931c57632c71"
	I1124 04:19:34.612533  500473 cri.go:89] found id: "5527fb66d2fe4207c38f35db01c1200259d25ccacd80469383a78b9131d8068a"
	I1124 04:19:34.612537  500473 cri.go:89] found id: "c13ec21534f6da44a392b7f2813576f9129b5e5a987bd880e4d67adb534318e3"
	I1124 04:19:34.612540  500473 cri.go:89] found id: "1561bf2881e2d625df271f5d46170b77c2fd6581ce6cf6a9bec4d003e044ec02"
	I1124 04:19:34.612544  500473 cri.go:89] found id: "037326d6daae293670fae3227d4c4b9bd31a79a31a1fbf7812a603c516e804eb"
	I1124 04:19:34.612566  500473 cri.go:89] found id: ""
	I1124 04:19:34.612624  500473 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:19:34.626854  500473 out.go:203] 
	W1124 04:19:34.629732  500473 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:19:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 04:19:34.629765  500473 out.go:285] * 
	W1124 04:19:34.636188  500473 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 04:19:34.641063  500473 out.go:203] 

** /stderr **
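The pause failure above reduces to one symptom: crictl lists six running kube-system containers, but every "sudo runc list -f json" attempt fails with "open /run/runc: no such file or directory", so minikube exhausts its retries and exits with GUEST_PAUSE (exit status 80). The following is a minimal Go sketch of that check; the runc invocation is taken verbatim from the log, while the function name and error wrapping are illustrative, not minikube's actual implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // listRunningRunc mirrors the check the log shows minikube retrying:
    // "sudo runc list -f json". The helper name and wrapping are ours.
    func listRunningRunc() ([]byte, error) {
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    	if err != nil {
    		// On this node runc reports "open /run/runc: no such file or
    		// directory": its state directory is missing even though crictl
    		// still lists running CRI-O containers.
    		return nil, fmt.Errorf("list running: runc: %w: %s", err, out)
    	}
    	return out, nil
    }

    func main() {
    	if _, err := listRunningRunc(); err != nil {
    		fmt.Println(err) // after the retries above, minikube gives up with GUEST_PAUSE
    	}
    }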
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-543467 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-543467
helpers_test.go:243: (dbg) docker inspect newest-cni-543467:

-- stdout --
	[
	    {
	        "Id": "d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa",
	        "Created": "2025-11-24T04:18:39.041842209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 498186,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:19:17.085720498Z",
	            "FinishedAt": "2025-11-24T04:19:15.994152307Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/hosts",
	        "LogPath": "/var/lib/docker/containers/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa-json.log",
	        "Name": "/newest-cni-543467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-543467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-543467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa",
	                "LowerDir": "/var/lib/docker/overlay2/508f75bd78cd9ee664b18d9c770c9f2ff20973534449594a6f1b58570079d85b-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/508f75bd78cd9ee664b18d9c770c9f2ff20973534449594a6f1b58570079d85b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/508f75bd78cd9ee664b18d9c770c9f2ff20973534449594a6f1b58570079d85b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/508f75bd78cd9ee664b18d9c770c9f2ff20973534449594a6f1b58570079d85b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-543467",
	                "Source": "/var/lib/docker/volumes/newest-cni-543467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-543467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-543467",
	                "name.minikube.sigs.k8s.io": "newest-cni-543467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "917d9c363511e57bd2672f8f916e3a1de05f4de502978198f50390cdeb61816f",
	            "SandboxKey": "/var/run/docker/netns/917d9c363511",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-543467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:e5:e1:93:dd:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fbc2fa8442ac0221bba9fd37174f7543e2a4c35cf01fdb513ae8d608db3a956a",
	                    "EndpointID": "ac6b4db944a3c498d87e0e4cb9c902a6de1931db90bec8305fa6cdccec6a7879",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-543467",
	                        "d5de64ccb4ee"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
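The inspect output above is also where the earlier SSH connection details come from: the host side of the 22/tcp mapping (127.0.0.1:33461) is read with the Go template that appears in the cli_runner lines. A small sketch, assuming only the docker CLI is available; the helper name and error handling are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort reads the host port bound to the guest's 22/tcp, using the
    // same Go template minikube logs through cli_runner.
    func sshHostPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("newest-cni-543467")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("22/tcp is published on 127.0.0.1:" + port) // 33461 per the inspect above
    }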
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-543467 -n newest-cni-543467
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-543467 -n newest-cni-543467: exit status 2 (340.243248ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
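Exit status 2 from minikube status is tolerated here because the host container is Running even though the profile is degraded (the aborted pause already disabled the kubelet). Treating code 2 as informational is the harness's "may be ok" behavior, not a documented minikube contract; a sketch of that tolerance:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-arm64", "status",
    		"--format={{.Host}}", "-p", "newest-cni-543467", "-n", "newest-cni-543467")
    	out, err := cmd.Output() // Output still returns the captured stdout on ExitError
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		// Host prints "Running", yet the exit code is 2 because other
    		// components are unhealthy after the aborted pause.
    		fmt.Printf("status exited %d (may be ok): %s", exitErr.ExitCode(), out)
    		return
    	}
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out)
    }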
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-543467 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-543467 logs -n 25: (1.467483259s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-600301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:16 UTC │
	│ start   │ -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:16 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-520529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ stop    │ -p embed-certs-520529 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-520529 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:18 UTC │
	│ image   │ no-preload-600301 image list --format=json                                                                                                                                                                                                    │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ pause   │ -p no-preload-600301 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p disable-driver-mounts-995056                                                                                                                                                                                                               │ disable-driver-mounts-995056 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:19 UTC │
	│ image   │ embed-certs-520529 image list --format=json                                                                                                                                                                                                   │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ pause   │ -p embed-certs-520529 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │                     │
	│ delete  │ -p embed-certs-520529                                                                                                                                                                                                                         │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ delete  │ -p embed-certs-520529                                                                                                                                                                                                                         │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ start   │ -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-543467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ stop    │ -p newest-cni-543467 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-543467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ start   │ -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-303179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-303179 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ image   │ newest-cni-543467 image list --format=json                                                                                                                                                                                                    │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ pause   │ -p newest-cni-543467 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:19:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
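	Every line below follows the glog-style header documented above ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). A sketch of how such lines can be split into fields; the regular expression and field names are illustrative assumptions, not part of minikube:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Matches [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg.
    var logLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

    func main() {
    	sample := "I1124 04:19:16.789796  498056 out.go:360] Setting OutFile to fd 1 ..."
    	m := logLine.FindStringSubmatch(sample)
    	if m == nil {
    		panic("line does not match the documented format")
    	}
    	fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6])
    }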
	I1124 04:19:16.789796  498056 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:19:16.790139  498056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:19:16.790154  498056 out.go:374] Setting ErrFile to fd 2...
	I1124 04:19:16.790160  498056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:19:16.790476  498056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:19:16.790851  498056 out.go:368] Setting JSON to false
	I1124 04:19:16.791783  498056 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10886,"bootTime":1763947071,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:19:16.791852  498056 start.go:143] virtualization:  
	I1124 04:19:16.795221  498056 out.go:179] * [newest-cni-543467] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:19:16.799296  498056 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:19:16.799443  498056 notify.go:221] Checking for updates...
	I1124 04:19:16.806014  498056 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:19:16.808909  498056 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:19:16.811586  498056 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:19:16.814476  498056 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:19:16.817320  498056 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:19:16.820610  498056 config.go:182] Loaded profile config "newest-cni-543467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:19:16.821307  498056 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:19:16.853580  498056 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:19:16.853689  498056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:19:16.922831  498056 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:19:16.906500928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:19:16.922933  498056 docker.go:319] overlay module found
	I1124 04:19:16.926067  498056 out.go:179] * Using the docker driver based on existing profile
	I1124 04:19:16.928941  498056 start.go:309] selected driver: docker
	I1124 04:19:16.928959  498056 start.go:927] validating driver "docker" against &{Name:newest-cni-543467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-543467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:19:16.929082  498056 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:19:16.929825  498056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:19:16.997005  498056 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:19:16.987250413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:19:16.997340  498056 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 04:19:16.997373  498056 cni.go:84] Creating CNI manager for ""
	I1124 04:19:16.997433  498056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:19:16.997474  498056 start.go:353] cluster config:
	{Name:newest-cni-543467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-543467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:19:17.001072  498056 out.go:179] * Starting "newest-cni-543467" primary control-plane node in "newest-cni-543467" cluster
	I1124 04:19:17.004490  498056 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:19:17.007461  498056 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:19:17.010289  498056 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:19:17.010358  498056 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 04:19:17.010373  498056 cache.go:65] Caching tarball of preloaded images
	I1124 04:19:17.010377  498056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:19:17.010566  498056 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:19:17.010579  498056 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 04:19:17.010689  498056 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/config.json ...
	I1124 04:19:17.031493  498056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:19:17.031515  498056 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:19:17.031530  498056 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:19:17.031561  498056 start.go:360] acquireMachinesLock for newest-cni-543467: {Name:mk49235894ca4bdab744b09877359a6e0584cafb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:19:17.031632  498056 start.go:364] duration metric: took 38.482µs to acquireMachinesLock for "newest-cni-543467"
	I1124 04:19:17.031656  498056 start.go:96] Skipping create...Using existing machine configuration
	I1124 04:19:17.031662  498056 fix.go:54] fixHost starting: 
	I1124 04:19:17.031927  498056 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:19:17.050729  498056 fix.go:112] recreateIfNeeded on newest-cni-543467: state=Stopped err=<nil>
	W1124 04:19:17.050761  498056 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 04:19:17.053953  498056 out.go:252] * Restarting existing docker container for "newest-cni-543467" ...
	I1124 04:19:17.054052  498056 cli_runner.go:164] Run: docker start newest-cni-543467
	I1124 04:19:17.331671  498056 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:19:17.358107  498056 kic.go:430] container "newest-cni-543467" state is running.
	I1124 04:19:17.358610  498056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-543467
	I1124 04:19:17.382371  498056 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/config.json ...
	I1124 04:19:17.382641  498056 machine.go:94] provisionDockerMachine start ...
	I1124 04:19:17.382917  498056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:17.403727  498056 main.go:143] libmachine: Using SSH client type: native
	I1124 04:19:17.404057  498056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1124 04:19:17.404066  498056 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:19:17.404810  498056 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51658->127.0.0.1:33461: read: connection reset by peer
	I1124 04:19:20.569894  498056 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-543467
	
	I1124 04:19:20.569926  498056 ubuntu.go:182] provisioning hostname "newest-cni-543467"
	I1124 04:19:20.569996  498056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:20.590532  498056 main.go:143] libmachine: Using SSH client type: native
	I1124 04:19:20.590845  498056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1124 04:19:20.590856  498056 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-543467 && echo "newest-cni-543467" | sudo tee /etc/hostname
	I1124 04:19:20.787751  498056 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-543467
	
	I1124 04:19:20.787839  498056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:20.815760  498056 main.go:143] libmachine: Using SSH client type: native
	I1124 04:19:20.816082  498056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1124 04:19:20.816101  498056 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-543467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-543467/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-543467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 04:19:20.984716  498056 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 04:19:20.984750  498056 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:19:20.984774  498056 ubuntu.go:190] setting up certificates
	I1124 04:19:20.984783  498056 provision.go:84] configureAuth start
	I1124 04:19:20.984844  498056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-543467
	I1124 04:19:21.028795  498056 provision.go:143] copyHostCerts
	I1124 04:19:21.029027  498056 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:19:21.029059  498056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:19:21.029174  498056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:19:21.029480  498056 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:19:21.029493  498056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:19:21.029554  498056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:19:21.029654  498056 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:19:21.029660  498056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:19:21.029691  498056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:19:21.029773  498056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.newest-cni-543467 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-543467]
	I1124 04:19:21.282863  498056 provision.go:177] copyRemoteCerts
	I1124 04:19:21.282925  498056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:19:21.282968  498056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:21.302623  498056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:19:21.407201  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:19:21.430566  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 04:19:21.457593  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 04:19:21.479796  498056 provision.go:87] duration metric: took 494.992583ms to configureAuth
	I1124 04:19:21.479869  498056 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:19:21.480109  498056 config.go:182] Loaded profile config "newest-cni-543467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:19:21.480252  498056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:21.509756  498056 main.go:143] libmachine: Using SSH client type: native
	I1124 04:19:21.510081  498056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1124 04:19:21.510096  498056 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:19:21.905017  498056 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 04:19:21.905068  498056 machine.go:97] duration metric: took 4.52241607s to provisionDockerMachine
	I1124 04:19:21.905080  498056 start.go:293] postStartSetup for "newest-cni-543467" (driver="docker")
	I1124 04:19:21.905099  498056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:19:21.905176  498056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:19:21.905221  498056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:21.926772  498056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:19:22.033458  498056 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:19:22.038412  498056 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:19:22.038440  498056 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:19:22.038465  498056 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:19:22.038526  498056 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:19:22.038601  498056 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:19:22.038719  498056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:19:22.047917  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:19:22.072354  498056 start.go:296] duration metric: took 167.251158ms for postStartSetup
	I1124 04:19:22.072496  498056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:19:22.072603  498056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:22.093201  498056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:19:22.200155  498056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:19:22.206740  498056 fix.go:56] duration metric: took 5.175070858s for fixHost
	I1124 04:19:22.206768  498056 start.go:83] releasing machines lock for "newest-cni-543467", held for 5.175122478s
	I1124 04:19:22.206838  498056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-543467
	I1124 04:19:22.230701  498056 ssh_runner.go:195] Run: cat /version.json
	I1124 04:19:22.230755  498056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:22.230806  498056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:19:22.230877  498056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:22.263806  498056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:19:22.283246  498056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:19:22.469010  498056 ssh_runner.go:195] Run: systemctl --version
	I1124 04:19:22.477558  498056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:19:22.529197  498056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:19:22.534430  498056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:19:22.534591  498056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:19:22.551205  498056 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 04:19:22.551232  498056 start.go:496] detecting cgroup driver to use...
	I1124 04:19:22.551265  498056 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:19:22.551327  498056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:19:22.576462  498056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:19:22.592499  498056 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:19:22.592569  498056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:19:22.611202  498056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:19:22.629832  498056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:19:22.815003  498056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:19:23.031379  498056 docker.go:234] disabling docker service ...
	I1124 04:19:23.031456  498056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:19:23.060248  498056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:19:23.085404  498056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:19:23.274011  498056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:19:23.435011  498056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:19:23.449771  498056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:19:23.463957  498056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:19:23.464026  498056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:23.473097  498056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:19:23.473168  498056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:23.482023  498056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:23.491356  498056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:23.500846  498056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:19:23.512167  498056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:23.521458  498056 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:23.533533  498056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
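All of the sed edits above target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, put conmon in the "pod" cgroup, and open privileged ports via default_sysctls. One way to eyeball the result (not from the log):

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # expected, given the edits above:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",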
	I1124 04:19:23.553599  498056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:19:23.563446  498056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:19:23.571997  498056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:23.760674  498056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 04:19:23.930381  498056 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:19:23.930541  498056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:19:23.934385  498056 start.go:564] Will wait 60s for crictl version
	I1124 04:19:23.934480  498056 ssh_runner.go:195] Run: which crictl
	I1124 04:19:23.937962  498056 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:19:23.961699  498056 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:19:23.961787  498056 ssh_runner.go:195] Run: crio --version
	I1124 04:19:23.990218  498056 ssh_runner.go:195] Run: crio --version
	I1124 04:19:24.025427  498056 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:19:24.028204  498056 cli_runner.go:164] Run: docker network inspect newest-cni-543467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:19:24.045026  498056 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 04:19:24.049119  498056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
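The /etc/hosts update above uses a rewrite-then-copy pattern: strip any stale host.minikube.internal line, append a fresh one, write the result to a temp file, and sudo cp it back in a single step so the file is never left half-written. The same pattern, unpacked:

  { grep -v $'\thost.minikube.internal$' /etc/hosts
    printf '192.168.76.1\thost.minikube.internal\n'
  } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts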
	I1124 04:19:24.062394  498056 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 04:19:24.065200  498056 kubeadm.go:884] updating cluster {Name:newest-cni-543467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-543467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:19:24.065355  498056 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:19:24.065427  498056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:19:24.100437  498056 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:19:24.100466  498056 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:19:24.100539  498056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:19:24.127049  498056 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:19:24.127076  498056 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:19:24.127085  498056 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1124 04:19:24.127203  498056 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-543467 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-543467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
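The [Unit]/[Service] fragment above is the kubelet drop-in minikube generates; a few lines below it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before the override sets the real one. To inspect the merged unit on the node:

  systemctl cat kubelet          # base unit plus all drop-ins
  sudo systemctl daemon-reload   # required after changing a drop-in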
	I1124 04:19:24.127297  498056 ssh_runner.go:195] Run: crio config
	I1124 04:19:24.180559  498056 cni.go:84] Creating CNI manager for ""
	I1124 04:19:24.180584  498056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:19:24.180603  498056 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 04:19:24.180626  498056 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-543467 NodeName:newest-cni-543467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:19:24.181434  498056 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-543467"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
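The kubeadm config above bundles four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is what lands in /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases can sanity-check such a file before it is used; a sketch, assuming the v1.34.1 binaries minikube staged under /var/lib/minikube/binaries:

  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new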
	
	I1124 04:19:24.181538  498056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:19:24.189336  498056 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:19:24.189450  498056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:19:24.196897  498056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 04:19:24.209311  498056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:19:24.223428  498056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1124 04:19:24.236256  498056 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:19:24.239738  498056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:19:24.249643  498056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:24.361883  498056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:19:24.380522  498056 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467 for IP: 192.168.76.2
	I1124 04:19:24.380545  498056 certs.go:195] generating shared ca certs ...
	I1124 04:19:24.380564  498056 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:24.380778  498056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:19:24.380846  498056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:19:24.380868  498056 certs.go:257] generating profile certs ...
	I1124 04:19:24.380985  498056 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/client.key
	I1124 04:19:24.381078  498056 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.key.e6db7c28
	I1124 04:19:24.381145  498056 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/proxy-client.key
	I1124 04:19:24.381287  498056 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:19:24.381346  498056 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:19:24.381365  498056 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:19:24.381412  498056 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:19:24.381456  498056 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:19:24.381495  498056 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:19:24.381565  498056 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:19:24.382191  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:19:24.405007  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:19:24.424923  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:19:24.445155  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:19:24.465156  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 04:19:24.485837  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 04:19:24.506531  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:19:24.535084  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/newest-cni-543467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 04:19:24.564923  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:19:24.614201  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:19:24.662561  498056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:19:24.704024  498056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:19:24.724821  498056 ssh_runner.go:195] Run: openssl version
	I1124 04:19:24.746193  498056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:19:24.758706  498056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:19:24.769405  498056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:19:24.769473  498056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:19:24.831129  498056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
	I1124 04:19:24.845228  498056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:19:24.856670  498056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:19:24.862084  498056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:19:24.862147  498056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:19:24.906164  498056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 04:19:24.918195  498056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:19:24.928395  498056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:24.932841  498056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:24.932921  498056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:25.000780  498056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
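The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory CA lookup: a certificate in /etc/ssl/certs is found via a symlink named <subject-hash>.0 that points at the PEM file. The b5213941.0 link for minikubeCA, reconstructed:

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  echo "$h"                                    # b5213941 for this CA
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"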
	I1124 04:19:25.017359  498056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:19:25.021842  498056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 04:19:25.064090  498056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 04:19:25.107109  498056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 04:19:25.155801  498056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 04:19:25.214431  498056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 04:19:25.285168  498056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
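The -checkend 86400 flag makes openssl exit non-zero when a certificate will expire within the next 86400 seconds, so each Run line above is a cheap "still valid for 24 hours?" probe. For example:

  if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
    echo "etcd server cert good for at least 24h"
  else
    echo "etcd server cert expires within 24h, regenerate"
  fi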
	I1124 04:19:25.371712  498056 kubeadm.go:401] StartCluster: {Name:newest-cni-543467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-543467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:19:25.371879  498056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:19:25.371991  498056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:19:25.416221  498056 cri.go:89] found id: "5527fb66d2fe4207c38f35db01c1200259d25ccacd80469383a78b9131d8068a"
	I1124 04:19:25.416242  498056 cri.go:89] found id: "c13ec21534f6da44a392b7f2813576f9129b5e5a987bd880e4d67adb534318e3"
	I1124 04:19:25.416247  498056 cri.go:89] found id: "1561bf2881e2d625df271f5d46170b77c2fd6581ce6cf6a9bec4d003e044ec02"
	I1124 04:19:25.416250  498056 cri.go:89] found id: "037326d6daae293670fae3227d4c4b9bd31a79a31a1fbf7812a603c516e804eb"
	I1124 04:19:25.416254  498056 cri.go:89] found id: ""
	I1124 04:19:25.416303  498056 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 04:19:25.436660  498056 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:19:25Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:19:25.436728  498056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:19:25.452297  498056 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 04:19:25.452364  498056 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 04:19:25.452441  498056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 04:19:25.463741  498056 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 04:19:25.464402  498056 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-543467" does not appear in /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:19:25.464761  498056 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-289526/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-543467" cluster setting kubeconfig missing "newest-cni-543467" context setting]
	I1124 04:19:25.465332  498056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:25.467082  498056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 04:19:25.477307  498056 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 04:19:25.477392  498056 kubeadm.go:602] duration metric: took 25.008655ms to restartPrimaryControlPlane
	I1124 04:19:25.477417  498056 kubeadm.go:403] duration metric: took 105.716209ms to StartCluster
	I1124 04:19:25.477455  498056 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:25.477551  498056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:19:25.478608  498056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:25.478885  498056 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:19:25.479357  498056 config.go:182] Loaded profile config "newest-cni-543467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:19:25.479439  498056 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:19:25.479516  498056 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-543467"
	I1124 04:19:25.479530  498056 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-543467"
	W1124 04:19:25.479536  498056 addons.go:248] addon storage-provisioner should already be in state true
	I1124 04:19:25.479558  498056 host.go:66] Checking if "newest-cni-543467" exists ...
	I1124 04:19:25.480061  498056 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:19:25.480262  498056 addons.go:70] Setting dashboard=true in profile "newest-cni-543467"
	I1124 04:19:25.480305  498056 addons.go:239] Setting addon dashboard=true in "newest-cni-543467"
	W1124 04:19:25.480326  498056 addons.go:248] addon dashboard should already be in state true
	I1124 04:19:25.480366  498056 host.go:66] Checking if "newest-cni-543467" exists ...
	I1124 04:19:25.480835  498056 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:19:25.483747  498056 addons.go:70] Setting default-storageclass=true in profile "newest-cni-543467"
	I1124 04:19:25.483824  498056 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-543467"
	I1124 04:19:25.485294  498056 out.go:179] * Verifying Kubernetes components...
	I1124 04:19:25.485486  498056 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:19:25.488535  498056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:25.525752  498056 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:19:25.529630  498056 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:19:25.529664  498056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:19:25.529739  498056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:25.559754  498056 addons.go:239] Setting addon default-storageclass=true in "newest-cni-543467"
	W1124 04:19:25.559776  498056 addons.go:248] addon default-storageclass should already be in state true
	I1124 04:19:25.559801  498056 host.go:66] Checking if "newest-cni-543467" exists ...
	I1124 04:19:25.560228  498056 cli_runner.go:164] Run: docker container inspect newest-cni-543467 --format={{.State.Status}}
	I1124 04:19:25.565433  498056 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 04:19:25.574609  498056 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 04:19:25.577523  498056 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 04:19:25.577559  498056 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 04:19:25.577636  498056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:25.598541  498056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:19:25.610753  498056 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:19:25.610776  498056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:19:25.610842  498056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-543467
	I1124 04:19:25.644046  498056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:19:25.650578  498056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/newest-cni-543467/id_rsa Username:docker}
	I1124 04:19:25.836962  498056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:19:25.839190  498056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:19:25.865576  498056 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:19:25.865696  498056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:19:25.882768  498056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:19:26.004653  498056 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 04:19:26.004733  498056 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 04:19:26.108557  498056 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 04:19:26.108625  498056 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 04:19:26.144993  498056 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 04:19:26.145079  498056 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 04:19:26.164802  498056 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 04:19:26.164875  498056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 04:19:26.185428  498056 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 04:19:26.185498  498056 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 04:19:26.207211  498056 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 04:19:26.207284  498056 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 04:19:26.224819  498056 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 04:19:26.224901  498056 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 04:19:26.245189  498056 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 04:19:26.245259  498056 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 04:19:26.262149  498056 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 04:19:26.262227  498056 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 04:19:26.283994  498056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 04:19:31.939928  498056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.100664843s)
	I1124 04:19:31.939983  498056 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.074250565s)
	I1124 04:19:31.939996  498056 api_server.go:72] duration metric: took 6.461046354s to wait for apiserver process to appear ...
	I1124 04:19:31.940001  498056 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:19:31.940018  498056 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 04:19:31.940317  498056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.057480287s)
	I1124 04:19:31.940592  498056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.656512792s)
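The dashboard apply above feeds all ten manifests to kubectl in one invocation via repeated -f flags. To watch the addon actually come up one could poll the rollout afterwards; a sketch assuming the stock object names (kubernetes-dashboard namespace and deployment), which this log does not itself print:

  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard \
    rollout status deployment/kubernetes-dashboard --timeout=120s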
	I1124 04:19:31.943699  498056 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-543467 addons enable metrics-server
	
	I1124 04:19:31.954585  498056 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 04:19:31.968535  498056 api_server.go:141] control plane version: v1.34.1
	I1124 04:19:31.968611  498056 api_server.go:131] duration metric: took 28.603407ms to wait for apiserver health ...
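The healthz wait above polls https://192.168.76.2:8443/healthz until the apiserver answers 200 with body "ok". The endpoint is readable without credentials (default RBAC grants /healthz to anonymous clients via the system:public-info-viewer role), so a plain curl reproduces the probe:

  curl -sk https://192.168.76.2:8443/healthz
  # ok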
	I1124 04:19:31.968636  498056 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:19:31.975745  498056 system_pods.go:59] 8 kube-system pods found
	I1124 04:19:31.975781  498056 system_pods.go:61] "coredns-66bc5c9577-crwzn" [0bcadfb0-f95f-42d3-b0c7-cd5d9056a0d6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 04:19:31.975791  498056 system_pods.go:61] "etcd-newest-cni-543467" [29381de0-7791-441e-a513-e35979ea0dd7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:19:31.975796  498056 system_pods.go:61] "kindnet-pzzgc" [298acecf-f8cf-46d2-bbfd-a73a057da8e8] Running
	I1124 04:19:31.975802  498056 system_pods.go:61] "kube-apiserver-newest-cni-543467" [07759926-5918-4158-84d1-c81b1a145e23] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:19:31.975818  498056 system_pods.go:61] "kube-controller-manager-newest-cni-543467" [252887d9-6a65-4755-b572-46d4cc1edca3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:19:31.975824  498056 system_pods.go:61] "kube-proxy-m2jcg" [10608e3c-2678-4bf9-9225-5b6421a2204c] Running
	I1124 04:19:31.975829  498056 system_pods.go:61] "kube-scheduler-newest-cni-543467" [2450721f-3e27-4653-8f61-54d10ec8cae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:19:31.975833  498056 system_pods.go:61] "storage-provisioner" [8602427f-09dd-41e4-92f2-b6aacf0608e8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 04:19:31.975839  498056 system_pods.go:74] duration metric: took 7.185761ms to wait for pod list to return data ...
	I1124 04:19:31.975847  498056 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:19:31.976116  498056 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 04:19:31.978690  498056 default_sa.go:45] found service account: "default"
	I1124 04:19:31.978757  498056 default_sa.go:55] duration metric: took 2.903789ms for default service account to be created ...
	I1124 04:19:31.978787  498056 kubeadm.go:587] duration metric: took 6.499834868s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 04:19:31.978837  498056 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:19:31.979188  498056 addons.go:530] duration metric: took 6.499751839s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 04:19:31.983348  498056 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:19:31.983412  498056 node_conditions.go:123] node cpu capacity is 2
	I1124 04:19:31.983439  498056 node_conditions.go:105] duration metric: took 4.578427ms to run NodePressure ...
	I1124 04:19:31.983489  498056 start.go:242] waiting for startup goroutines ...
	I1124 04:19:31.983515  498056 start.go:247] waiting for cluster config update ...
	I1124 04:19:31.983541  498056 start.go:256] writing updated cluster config ...
	I1124 04:19:31.983865  498056 ssh_runner.go:195] Run: rm -f paused
	I1124 04:19:32.064771  498056 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 04:19:32.069902  498056 out.go:179] * Done! kubectl is now configured to use "newest-cni-543467" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.827582138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.839178143Z" level=info msg="Running pod sandbox: kube-system/kindnet-pzzgc/POD" id=6dcb107d-1f5d-4ab6-b598-1857a1dd0430 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.842796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.843388426Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3e64c13d-2190-4ff6-abc8-128d68653655 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.874398959Z" level=info msg="Ran pod sandbox 4e9ab424f147422957507598c0dd60b4a38e23da863a68df338c9ce6a688db2a with infra container: kube-system/kube-proxy-m2jcg/POD" id=3e64c13d-2190-4ff6-abc8-128d68653655 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.87883518Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6dcb107d-1f5d-4ab6-b598-1857a1dd0430 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.886064856Z" level=info msg="Ran pod sandbox 6a1b75820128498a68ec26efc212657d6295c4a18e3819232497f063c802cd1d with infra container: kube-system/kindnet-pzzgc/POD" id=6dcb107d-1f5d-4ab6-b598-1857a1dd0430 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.900387695Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2dd84253-bee7-4c63-a59b-7821ff21e11d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.90493742Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=39182552-bdf3-4576-8ad7-01dbb961bfc0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.906538736Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=305c3b6f-dbfa-4d83-ba59-98a231ec4d07 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.906806858Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f3a917a7-0fb2-4b90-a505-73cdf1e23913 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.908328542Z" level=info msg="Creating container: kube-system/kindnet-pzzgc/kindnet-cni" id=fe2bab4a-1b6c-4548-9187-3b66a57f30f8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.908421089Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.914723335Z" level=info msg="Creating container: kube-system/kube-proxy-m2jcg/kube-proxy" id=6282db34-892d-4256-93e9-ca4e8cb77196 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.914837971Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.928173295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.928681207Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.94048802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.94107645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:31 newest-cni-543467 crio[615]: time="2025-11-24T04:19:31.094910621Z" level=info msg="Created container fd99f976eca993b9b60a234ee0fa68afe7273c5df82607dfca654e378d0f1921: kube-system/kindnet-pzzgc/kindnet-cni" id=fe2bab4a-1b6c-4548-9187-3b66a57f30f8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:31 newest-cni-543467 crio[615]: time="2025-11-24T04:19:31.09594864Z" level=info msg="Starting container: fd99f976eca993b9b60a234ee0fa68afe7273c5df82607dfca654e378d0f1921" id=0f2d3777-adc3-4bef-916a-cc43d531a35d name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:19:31 newest-cni-543467 crio[615]: time="2025-11-24T04:19:31.104458355Z" level=info msg="Started container" PID=1072 containerID=fd99f976eca993b9b60a234ee0fa68afe7273c5df82607dfca654e378d0f1921 description=kube-system/kindnet-pzzgc/kindnet-cni id=0f2d3777-adc3-4bef-916a-cc43d531a35d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a1b75820128498a68ec26efc212657d6295c4a18e3819232497f063c802cd1d
	Nov 24 04:19:31 newest-cni-543467 crio[615]: time="2025-11-24T04:19:31.162792073Z" level=info msg="Created container 201b2c25c22abf0377b85a36b8eb31fe80999b02511bf72b46ea931c57632c71: kube-system/kube-proxy-m2jcg/kube-proxy" id=6282db34-892d-4256-93e9-ca4e8cb77196 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:31 newest-cni-543467 crio[615]: time="2025-11-24T04:19:31.171247938Z" level=info msg="Starting container: 201b2c25c22abf0377b85a36b8eb31fe80999b02511bf72b46ea931c57632c71" id=b17b6678-6c60-4d6b-ba41-022314cd39f7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:19:31 newest-cni-543467 crio[615]: time="2025-11-24T04:19:31.179503791Z" level=info msg="Started container" PID=1069 containerID=201b2c25c22abf0377b85a36b8eb31fe80999b02511bf72b46ea931c57632c71 description=kube-system/kube-proxy-m2jcg/kube-proxy id=b17b6678-6c60-4d6b-ba41-022314cd39f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e9ab424f147422957507598c0dd60b4a38e23da863a68df338c9ce6a688db2a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	fd99f976eca99       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               1                   6a1b758201284       kindnet-pzzgc                               kube-system
	201b2c25c22ab       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   4e9ab424f1474       kube-proxy-m2jcg                            kube-system
	5527fb66d2fe4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   2c6ed9de16144       kube-controller-manager-newest-cni-543467   kube-system
	c13ec21534f6d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            1                   71dd0f3a8447e       kube-scheduler-newest-cni-543467            kube-system
	1561bf2881e2d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   0782608edfed8       etcd-newest-cni-543467                      kube-system
	037326d6daae2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   419e2a8b3840f       kube-apiserver-newest-cni-543467            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-543467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-543467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=newest-cni-543467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_19_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:19:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-543467
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:19:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:19:30 +0000   Mon, 24 Nov 2025 04:18:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:19:30 +0000   Mon, 24 Nov 2025 04:18:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:19:30 +0000   Mon, 24 Nov 2025 04:18:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 04:19:30 +0000   Mon, 24 Nov 2025 04:18:57 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-543467
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                9187578d-6ec8-41b4-a303-b1b23fbde790
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-543467                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-pzzgc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-543467             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-543467    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-m2jcg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-543467             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node newest-cni-543467 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 40s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 40s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node newest-cni-543467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node newest-cni-543467 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     31s                kubelet          Node newest-cni-543467 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  31s                kubelet          Node newest-cni-543467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    31s                kubelet          Node newest-cni-543467 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-543467 event: Registered Node newest-cni-543467 in Controller
	  Normal   Starting                 12s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-543467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-543467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-543467 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-543467 event: Registered Node newest-cni-543467 in Controller
	
	
	==> dmesg <==
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	[Nov24 04:15] overlayfs: idmapped layers are currently not supported
	[ +47.476343] overlayfs: idmapped layers are currently not supported
	[Nov24 04:16] overlayfs: idmapped layers are currently not supported
	[Nov24 04:17] overlayfs: idmapped layers are currently not supported
	[Nov24 04:18] overlayfs: idmapped layers are currently not supported
	[ +43.060353] overlayfs: idmapped layers are currently not supported
	[Nov24 04:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1561bf2881e2d625df271f5d46170b77c2fd6581ce6cf6a9bec4d003e044ec02] <==
	{"level":"warn","ts":"2025-11-24T04:19:28.915547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:28.939015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:28.953416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:28.992576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:28.998245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.015188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.065400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.079629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.103391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.146878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.158143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.179287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.195520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.207709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.227707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.240452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.259004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.274517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.316414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.349514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.373542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.402682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.407696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.461964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52714","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T04:19:31.095949Z","caller":"traceutil/trace.go:172","msg":"trace[1682586602] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"117.072488ms","start":"2025-11-24T04:19:30.978858Z","end":"2025-11-24T04:19:31.095931Z","steps":["trace[1682586602] 'process raft request'  (duration: 115.545454ms)"],"step_count":1}
	
	
	==> kernel <==
	 04:19:36 up  3:01,  0 user,  load average: 2.99, 3.28, 2.87
	Linux newest-cni-543467 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fd99f976eca993b9b60a234ee0fa68afe7273c5df82607dfca654e378d0f1921] <==
	I1124 04:19:31.216408       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:19:31.216770       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 04:19:31.216883       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:19:31.216894       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:19:31.216908       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:19:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:19:31.416455       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:19:31.416481       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:19:31.416490       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:19:31.417223       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [037326d6daae293670fae3227d4c4b9bd31a79a31a1fbf7812a603c516e804eb] <==
	I1124 04:19:30.414993       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 04:19:30.420450       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 04:19:30.420491       1 policy_source.go:240] refreshing policies
	I1124 04:19:30.420656       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 04:19:30.440114       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:19:30.455598       1 cache.go:39] Caches are synced for autoregister controller
	I1124 04:19:30.486504       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 04:19:30.487265       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 04:19:30.495104       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 04:19:30.495952       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:19:30.501820       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 04:19:30.501843       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 04:19:30.502342       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1124 04:19:30.579401       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 04:19:30.643549       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:19:31.280873       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:19:31.429690       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 04:19:31.528650       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 04:19:31.608538       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:19:31.642653       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:19:31.770834       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.52.147"}
	I1124 04:19:31.810342       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.101.66"}
	I1124 04:19:33.913996       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 04:19:34.241881       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 04:19:34.344034       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5527fb66d2fe4207c38f35db01c1200259d25ccacd80469383a78b9131d8068a] <==
	I1124 04:19:33.836965       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 04:19:33.836976       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 04:19:33.836996       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 04:19:33.837490       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 04:19:33.837520       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:19:33.837529       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 04:19:33.837536       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 04:19:33.837545       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 04:19:33.836931       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 04:19:33.842647       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 04:19:33.845671       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 04:19:33.846939       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 04:19:33.853500       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 04:19:33.860842       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 04:19:33.864543       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:19:33.878851       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 04:19:33.888688       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 04:19:33.888965       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:19:33.889271       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:19:33.889330       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:19:33.889417       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 04:19:33.890343       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 04:19:33.893186       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:19:33.895761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 04:19:33.910630       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [201b2c25c22abf0377b85a36b8eb31fe80999b02511bf72b46ea931c57632c71] <==
	I1124 04:19:31.495770       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:19:31.608744       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:19:31.709040       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:19:31.709547       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 04:19:31.709671       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:19:31.779913       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:19:31.779974       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:19:31.792731       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:19:31.793122       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:19:31.793334       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:19:31.799097       1 config.go:200] "Starting service config controller"
	I1124 04:19:31.810567       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:19:31.810633       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:19:31.810646       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:19:31.810662       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:19:31.810674       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:19:31.811323       1 config.go:309] "Starting node config controller"
	I1124 04:19:31.811345       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:19:31.811355       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:19:31.913313       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:19:31.913387       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:19:31.913428       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c13ec21534f6da44a392b7f2813576f9129b5e5a987bd880e4d67adb534318e3] <==
	I1124 04:19:28.507425       1 serving.go:386] Generated self-signed cert in-memory
	I1124 04:19:31.187810       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 04:19:31.191512       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:19:31.200388       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 04:19:31.200505       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 04:19:31.200523       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 04:19:31.200545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 04:19:31.203009       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:19:31.203021       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:19:31.203040       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:19:31.203047       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:19:31.302236       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 04:19:31.305067       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:19:31.305168       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: E1124 04:19:30.275152     736 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-543467\" not found" node="newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.443687     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: E1124 04:19:30.484236     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-543467\" already exists" pod="kube-system/etcd-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.484273     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.501513     736 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.501630     736 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.501659     736 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: E1124 04:19:30.506806     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-543467\" already exists" pod="kube-system/kube-apiserver-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.506830     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.507499     736 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.509479     736 apiserver.go:52] "Watching apiserver"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.529187     736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: E1124 04:19:30.537891     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-543467\" already exists" pod="kube-system/kube-controller-manager-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.537934     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: E1124 04:19:30.584506     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-543467\" already exists" pod="kube-system/kube-scheduler-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.608015     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/298acecf-f8cf-46d2-bbfd-a73a057da8e8-cni-cfg\") pod \"kindnet-pzzgc\" (UID: \"298acecf-f8cf-46d2-bbfd-a73a057da8e8\") " pod="kube-system/kindnet-pzzgc"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.608165     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/298acecf-f8cf-46d2-bbfd-a73a057da8e8-xtables-lock\") pod \"kindnet-pzzgc\" (UID: \"298acecf-f8cf-46d2-bbfd-a73a057da8e8\") " pod="kube-system/kindnet-pzzgc"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.608298     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10608e3c-2678-4bf9-9225-5b6421a2204c-xtables-lock\") pod \"kube-proxy-m2jcg\" (UID: \"10608e3c-2678-4bf9-9225-5b6421a2204c\") " pod="kube-system/kube-proxy-m2jcg"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.608323     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/298acecf-f8cf-46d2-bbfd-a73a057da8e8-lib-modules\") pod \"kindnet-pzzgc\" (UID: \"298acecf-f8cf-46d2-bbfd-a73a057da8e8\") " pod="kube-system/kindnet-pzzgc"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.608466     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10608e3c-2678-4bf9-9225-5b6421a2204c-lib-modules\") pod \"kube-proxy-m2jcg\" (UID: \"10608e3c-2678-4bf9-9225-5b6421a2204c\") " pod="kube-system/kube-proxy-m2jcg"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.688523     736 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: W1124 04:19:30.873661     736 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/crio-4e9ab424f147422957507598c0dd60b4a38e23da863a68df338c9ce6a688db2a WatchSource:0}: Error finding container 4e9ab424f147422957507598c0dd60b4a38e23da863a68df338c9ce6a688db2a: Status 404 returned error can't find the container with id 4e9ab424f147422957507598c0dd60b4a38e23da863a68df338c9ce6a688db2a
	Nov 24 04:19:33 newest-cni-543467 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 04:19:33 newest-cni-543467 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 04:19:33 newest-cni-543467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
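The kubelet tail above ends with systemd deactivating kubelet.service at 04:19:33, which lines up with the pause attempt under test. To check the unit state by hand inside the node container, one option is the minikube ssh wrapper (a sketch, not part of the harness; the profile name is taken from this run and the systemctl invocations are standard):

    out/minikube-linux-arm64 -p newest-cni-543467 ssh -- sudo systemctl is-active kubelet
    out/minikube-linux-arm64 -p newest-cni-543467 ssh -- sudo systemctl status kubelet --no-pager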
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-543467 -n newest-cni-543467
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-543467 -n newest-cni-543467: exit status 2 (544.236214ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-543467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-crwzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-mvxg9 kubernetes-dashboard-855c9754f9-sxpqd
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-543467 describe pod coredns-66bc5c9577-crwzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-mvxg9 kubernetes-dashboard-855c9754f9-sxpqd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-543467 describe pod coredns-66bc5c9577-crwzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-mvxg9 kubernetes-dashboard-855c9754f9-sxpqd: exit status 1 (95.394374ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-crwzn" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-mvxg9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-sxpqd" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-543467 describe pod coredns-66bc5c9577-crwzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-mvxg9 kubernetes-dashboard-855c9754f9-sxpqd: exit status 1
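Note the race here: the pods reported as non-running at helpers_test.go:280 appear to have been deleted before the follow-up describe ran, hence the NotFound errors. The underlying query can be reproduced directly (a sketch using the same context name as above; field-selector and jsonpath are standard kubectl flags):

    kubectl --context newest-cni-543467 get po -A \
      --field-selector=status.phase!=Running \
      -o=jsonpath='{.items[*].metadata.name}'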
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-543467
helpers_test.go:243: (dbg) docker inspect newest-cni-543467:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa",
	        "Created": "2025-11-24T04:18:39.041842209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 498186,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:19:17.085720498Z",
	            "FinishedAt": "2025-11-24T04:19:15.994152307Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/hosts",
	        "LogPath": "/var/lib/docker/containers/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa-json.log",
	        "Name": "/newest-cni-543467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-543467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-543467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa",
	                "LowerDir": "/var/lib/docker/overlay2/508f75bd78cd9ee664b18d9c770c9f2ff20973534449594a6f1b58570079d85b-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/508f75bd78cd9ee664b18d9c770c9f2ff20973534449594a6f1b58570079d85b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/508f75bd78cd9ee664b18d9c770c9f2ff20973534449594a6f1b58570079d85b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/508f75bd78cd9ee664b18d9c770c9f2ff20973534449594a6f1b58570079d85b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-543467",
	                "Source": "/var/lib/docker/volumes/newest-cni-543467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-543467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-543467",
	                "name.minikube.sigs.k8s.io": "newest-cni-543467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "917d9c363511e57bd2672f8f916e3a1de05f4de502978198f50390cdeb61816f",
	            "SandboxKey": "/var/run/docker/netns/917d9c363511",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-543467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:e5:e1:93:dd:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fbc2fa8442ac0221bba9fd37174f7543e2a4c35cf01fdb513ae8d608db3a956a",
	                    "EndpointID": "ac6b4db944a3c498d87e0e4cb9c902a6de1931db90bec8305fa6cdccec6a7879",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-543467",
	                        "d5de64ccb4ee"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
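The inspect dump is verbose; for a pause failure the container State block and the published 8443 (API server) port are usually what matter. Go templates can pull just those fields (a minimal sketch using standard docker inspect template syntax):

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-543467
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-543467

Against the dump above this would report Status=running, Paused=false, with 8443 published on 127.0.0.1:33464.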
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-543467 -n newest-cni-543467
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-543467 -n newest-cni-543467: exit status 2 (346.57288ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-543467 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-543467 logs -n 25: (1.058926131s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-520529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ stop    │ -p embed-certs-520529 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-520529 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:18 UTC │
	│ image   │ no-preload-600301 image list --format=json                                                                                                                                                                                                    │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ pause   │ -p no-preload-600301 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p disable-driver-mounts-995056                                                                                                                                                                                                               │ disable-driver-mounts-995056 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:19 UTC │
	│ image   │ embed-certs-520529 image list --format=json                                                                                                                                                                                                   │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ pause   │ -p embed-certs-520529 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │                     │
	│ delete  │ -p embed-certs-520529                                                                                                                                                                                                                         │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ delete  │ -p embed-certs-520529                                                                                                                                                                                                                         │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ start   │ -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-543467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ stop    │ -p newest-cni-543467 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-543467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ start   │ -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-303179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-303179 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ image   │ newest-cni-543467 image list --format=json                                                                                                                                                                                                    │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ pause   │ -p newest-cni-543467 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-303179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ start   │ -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:19:35
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:19:35.733626  500996 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:19:35.733858  500996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:19:35.733889  500996 out.go:374] Setting ErrFile to fd 2...
	I1124 04:19:35.733909  500996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:19:35.734205  500996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:19:35.734635  500996 out.go:368] Setting JSON to false
	I1124 04:19:35.735648  500996 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10905,"bootTime":1763947071,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:19:35.735748  500996 start.go:143] virtualization:  
	I1124 04:19:35.740846  500996 out.go:179] * [default-k8s-diff-port-303179] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:19:35.743929  500996 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:19:35.744013  500996 notify.go:221] Checking for updates...
	I1124 04:19:35.757602  500996 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:19:35.760583  500996 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:19:35.763529  500996 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:19:35.766431  500996 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:19:35.769590  500996 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:19:35.773121  500996 config.go:182] Loaded profile config "default-k8s-diff-port-303179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:19:35.773869  500996 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:19:35.816262  500996 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:19:35.816425  500996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:19:35.928846  500996 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:19:35.918318075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:19:35.928975  500996 docker.go:319] overlay module found
	I1124 04:19:35.932027  500996 out.go:179] * Using the docker driver based on existing profile
	I1124 04:19:35.934870  500996 start.go:309] selected driver: docker
	I1124 04:19:35.934891  500996 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:19:35.934989  500996 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:19:35.935750  500996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:19:36.024086  500996 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 04:19:36.013084068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:19:36.024440  500996 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:19:36.024472  500996 cni.go:84] Creating CNI manager for ""
	I1124 04:19:36.024535  500996 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:19:36.024591  500996 start.go:353] cluster config:
	{Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:19:36.027703  500996 out.go:179] * Starting "default-k8s-diff-port-303179" primary control-plane node in "default-k8s-diff-port-303179" cluster
	I1124 04:19:36.030712  500996 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:19:36.034796  500996 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:19:36.038532  500996 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:19:36.038587  500996 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 04:19:36.038598  500996 cache.go:65] Caching tarball of preloaded images
	I1124 04:19:36.038681  500996 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:19:36.038696  500996 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 04:19:36.038818  500996 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/config.json ...
	I1124 04:19:36.039046  500996 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:19:36.063211  500996 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:19:36.063236  500996 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:19:36.063252  500996 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:19:36.063287  500996 start.go:360] acquireMachinesLock for default-k8s-diff-port-303179: {Name:mk876fcea2f12d71199d194b5970210275c2b905 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:19:36.063345  500996 start.go:364] duration metric: took 36.136µs to acquireMachinesLock for "default-k8s-diff-port-303179"
	I1124 04:19:36.063372  500996 start.go:96] Skipping create...Using existing machine configuration
	I1124 04:19:36.063378  500996 fix.go:54] fixHost starting: 
	I1124 04:19:36.063653  500996 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:19:36.092434  500996 fix.go:112] recreateIfNeeded on default-k8s-diff-port-303179: state=Stopped err=<nil>
	W1124 04:19:36.092469  500996 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.827582138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.839178143Z" level=info msg="Running pod sandbox: kube-system/kindnet-pzzgc/POD" id=6dcb107d-1f5d-4ab6-b598-1857a1dd0430 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.842796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.843388426Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3e64c13d-2190-4ff6-abc8-128d68653655 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.874398959Z" level=info msg="Ran pod sandbox 4e9ab424f147422957507598c0dd60b4a38e23da863a68df338c9ce6a688db2a with infra container: kube-system/kube-proxy-m2jcg/POD" id=3e64c13d-2190-4ff6-abc8-128d68653655 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.87883518Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6dcb107d-1f5d-4ab6-b598-1857a1dd0430 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.886064856Z" level=info msg="Ran pod sandbox 6a1b75820128498a68ec26efc212657d6295c4a18e3819232497f063c802cd1d with infra container: kube-system/kindnet-pzzgc/POD" id=6dcb107d-1f5d-4ab6-b598-1857a1dd0430 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.900387695Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2dd84253-bee7-4c63-a59b-7821ff21e11d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.90493742Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=39182552-bdf3-4576-8ad7-01dbb961bfc0 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.906538736Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=305c3b6f-dbfa-4d83-ba59-98a231ec4d07 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.906806858Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f3a917a7-0fb2-4b90-a505-73cdf1e23913 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.908328542Z" level=info msg="Creating container: kube-system/kindnet-pzzgc/kindnet-cni" id=fe2bab4a-1b6c-4548-9187-3b66a57f30f8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.908421089Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.914723335Z" level=info msg="Creating container: kube-system/kube-proxy-m2jcg/kube-proxy" id=6282db34-892d-4256-93e9-ca4e8cb77196 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.914837971Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.928173295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.928681207Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.94048802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:30 newest-cni-543467 crio[615]: time="2025-11-24T04:19:30.94107645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:19:31 newest-cni-543467 crio[615]: time="2025-11-24T04:19:31.094910621Z" level=info msg="Created container fd99f976eca993b9b60a234ee0fa68afe7273c5df82607dfca654e378d0f1921: kube-system/kindnet-pzzgc/kindnet-cni" id=fe2bab4a-1b6c-4548-9187-3b66a57f30f8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:31 newest-cni-543467 crio[615]: time="2025-11-24T04:19:31.09594864Z" level=info msg="Starting container: fd99f976eca993b9b60a234ee0fa68afe7273c5df82607dfca654e378d0f1921" id=0f2d3777-adc3-4bef-916a-cc43d531a35d name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:19:31 newest-cni-543467 crio[615]: time="2025-11-24T04:19:31.104458355Z" level=info msg="Started container" PID=1072 containerID=fd99f976eca993b9b60a234ee0fa68afe7273c5df82607dfca654e378d0f1921 description=kube-system/kindnet-pzzgc/kindnet-cni id=0f2d3777-adc3-4bef-916a-cc43d531a35d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a1b75820128498a68ec26efc212657d6295c4a18e3819232497f063c802cd1d
	Nov 24 04:19:31 newest-cni-543467 crio[615]: time="2025-11-24T04:19:31.162792073Z" level=info msg="Created container 201b2c25c22abf0377b85a36b8eb31fe80999b02511bf72b46ea931c57632c71: kube-system/kube-proxy-m2jcg/kube-proxy" id=6282db34-892d-4256-93e9-ca4e8cb77196 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:19:31 newest-cni-543467 crio[615]: time="2025-11-24T04:19:31.171247938Z" level=info msg="Starting container: 201b2c25c22abf0377b85a36b8eb31fe80999b02511bf72b46ea931c57632c71" id=b17b6678-6c60-4d6b-ba41-022314cd39f7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:19:31 newest-cni-543467 crio[615]: time="2025-11-24T04:19:31.179503791Z" level=info msg="Started container" PID=1069 containerID=201b2c25c22abf0377b85a36b8eb31fe80999b02511bf72b46ea931c57632c71 description=kube-system/kube-proxy-m2jcg/kube-proxy id=b17b6678-6c60-4d6b-ba41-022314cd39f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e9ab424f147422957507598c0dd60b4a38e23da863a68df338c9ce6a688db2a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	fd99f976eca99       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   6a1b758201284       kindnet-pzzgc                               kube-system
	201b2c25c22ab       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   4e9ab424f1474       kube-proxy-m2jcg                            kube-system
	5527fb66d2fe4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   2c6ed9de16144       kube-controller-manager-newest-cni-543467   kube-system
	c13ec21534f6d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   71dd0f3a8447e       kube-scheduler-newest-cni-543467            kube-system
	1561bf2881e2d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   0782608edfed8       etcd-newest-cni-543467                      kube-system
	037326d6daae2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   419e2a8b3840f       kube-apiserver-newest-cni-543467            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-543467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-543467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=newest-cni-543467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_19_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:19:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-543467
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:19:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:19:30 +0000   Mon, 24 Nov 2025 04:18:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:19:30 +0000   Mon, 24 Nov 2025 04:18:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:19:30 +0000   Mon, 24 Nov 2025 04:18:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 04:19:30 +0000   Mon, 24 Nov 2025 04:18:57 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-543467
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                9187578d-6ec8-41b4-a303-b1b23fbde790
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-543467                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-pzzgc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-543467             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-543467    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-m2jcg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-543467             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node newest-cni-543467 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 42s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 42s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node newest-cni-543467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s (x8 over 42s)  kubelet          Node newest-cni-543467 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-543467 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-543467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-543467 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-543467 event: Registered Node newest-cni-543467 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-543467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-543467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-543467 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-543467 event: Registered Node newest-cni-543467 in Controller
	
	
	==> dmesg <==
	[Nov24 03:57] overlayfs: idmapped layers are currently not supported
	[  +3.077077] overlayfs: idmapped layers are currently not supported
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	[Nov24 04:15] overlayfs: idmapped layers are currently not supported
	[ +47.476343] overlayfs: idmapped layers are currently not supported
	[Nov24 04:16] overlayfs: idmapped layers are currently not supported
	[Nov24 04:17] overlayfs: idmapped layers are currently not supported
	[Nov24 04:18] overlayfs: idmapped layers are currently not supported
	[ +43.060353] overlayfs: idmapped layers are currently not supported
	[Nov24 04:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1561bf2881e2d625df271f5d46170b77c2fd6581ce6cf6a9bec4d003e044ec02] <==
	{"level":"warn","ts":"2025-11-24T04:19:28.915547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:28.939015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:28.953416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:28.992576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:28.998245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.015188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.065400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.079629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.103391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.146878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.158143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.179287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.195520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.207709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.227707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.240452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.259004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.274517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.316414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.349514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.373542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.402682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.407696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:29.461964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52714","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T04:19:31.095949Z","caller":"traceutil/trace.go:172","msg":"trace[1682586602] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"117.072488ms","start":"2025-11-24T04:19:30.978858Z","end":"2025-11-24T04:19:31.095931Z","steps":["trace[1682586602] 'process raft request'  (duration: 115.545454ms)"],"step_count":1}
	
	
	==> kernel <==
	 04:19:38 up  3:01,  0 user,  load average: 2.99, 3.28, 2.87
	Linux newest-cni-543467 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fd99f976eca993b9b60a234ee0fa68afe7273c5df82607dfca654e378d0f1921] <==
	I1124 04:19:31.216408       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:19:31.216770       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 04:19:31.216883       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:19:31.216894       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:19:31.216908       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:19:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:19:31.416455       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:19:31.416481       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:19:31.416490       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:19:31.417223       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [037326d6daae293670fae3227d4c4b9bd31a79a31a1fbf7812a603c516e804eb] <==
	I1124 04:19:30.414993       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 04:19:30.420450       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 04:19:30.420491       1 policy_source.go:240] refreshing policies
	I1124 04:19:30.420656       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 04:19:30.440114       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 04:19:30.455598       1 cache.go:39] Caches are synced for autoregister controller
	I1124 04:19:30.486504       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 04:19:30.487265       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 04:19:30.495104       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 04:19:30.495952       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 04:19:30.501820       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 04:19:30.501843       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 04:19:30.502342       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1124 04:19:30.579401       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 04:19:30.643549       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:19:31.280873       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:19:31.429690       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 04:19:31.528650       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 04:19:31.608538       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:19:31.642653       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:19:31.770834       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.52.147"}
	I1124 04:19:31.810342       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.101.66"}
	I1124 04:19:33.913996       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 04:19:34.241881       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 04:19:34.344034       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5527fb66d2fe4207c38f35db01c1200259d25ccacd80469383a78b9131d8068a] <==
	I1124 04:19:33.836965       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 04:19:33.836976       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 04:19:33.836996       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 04:19:33.837490       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 04:19:33.837520       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:19:33.837529       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 04:19:33.837536       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 04:19:33.837545       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 04:19:33.836931       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 04:19:33.842647       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 04:19:33.845671       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 04:19:33.846939       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 04:19:33.853500       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 04:19:33.860842       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 04:19:33.864543       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:19:33.878851       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 04:19:33.888688       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 04:19:33.888965       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:19:33.889271       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:19:33.889330       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:19:33.889417       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 04:19:33.890343       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 04:19:33.893186       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:19:33.895761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 04:19:33.910630       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [201b2c25c22abf0377b85a36b8eb31fe80999b02511bf72b46ea931c57632c71] <==
	I1124 04:19:31.495770       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:19:31.608744       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:19:31.709040       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:19:31.709547       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 04:19:31.709671       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:19:31.779913       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:19:31.779974       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:19:31.792731       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:19:31.793122       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:19:31.793334       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:19:31.799097       1 config.go:200] "Starting service config controller"
	I1124 04:19:31.810567       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:19:31.810633       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:19:31.810646       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:19:31.810662       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:19:31.810674       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:19:31.811323       1 config.go:309] "Starting node config controller"
	I1124 04:19:31.811345       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:19:31.811355       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:19:31.913313       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:19:31.913387       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:19:31.913428       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c13ec21534f6da44a392b7f2813576f9129b5e5a987bd880e4d67adb534318e3] <==
	I1124 04:19:28.507425       1 serving.go:386] Generated self-signed cert in-memory
	I1124 04:19:31.187810       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 04:19:31.191512       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:19:31.200388       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 04:19:31.200505       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 04:19:31.200523       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 04:19:31.200545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 04:19:31.203009       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:19:31.203021       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:19:31.203040       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:19:31.203047       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:19:31.302236       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 04:19:31.305067       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 04:19:31.305168       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: E1124 04:19:30.275152     736 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-543467\" not found" node="newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.443687     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: E1124 04:19:30.484236     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-543467\" already exists" pod="kube-system/etcd-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.484273     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.501513     736 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.501630     736 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.501659     736 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: E1124 04:19:30.506806     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-543467\" already exists" pod="kube-system/kube-apiserver-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.506830     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.507499     736 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.509479     736 apiserver.go:52] "Watching apiserver"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.529187     736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: E1124 04:19:30.537891     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-543467\" already exists" pod="kube-system/kube-controller-manager-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.537934     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: E1124 04:19:30.584506     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-543467\" already exists" pod="kube-system/kube-scheduler-newest-cni-543467"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.608015     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/298acecf-f8cf-46d2-bbfd-a73a057da8e8-cni-cfg\") pod \"kindnet-pzzgc\" (UID: \"298acecf-f8cf-46d2-bbfd-a73a057da8e8\") " pod="kube-system/kindnet-pzzgc"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.608165     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/298acecf-f8cf-46d2-bbfd-a73a057da8e8-xtables-lock\") pod \"kindnet-pzzgc\" (UID: \"298acecf-f8cf-46d2-bbfd-a73a057da8e8\") " pod="kube-system/kindnet-pzzgc"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.608298     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10608e3c-2678-4bf9-9225-5b6421a2204c-xtables-lock\") pod \"kube-proxy-m2jcg\" (UID: \"10608e3c-2678-4bf9-9225-5b6421a2204c\") " pod="kube-system/kube-proxy-m2jcg"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.608323     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/298acecf-f8cf-46d2-bbfd-a73a057da8e8-lib-modules\") pod \"kindnet-pzzgc\" (UID: \"298acecf-f8cf-46d2-bbfd-a73a057da8e8\") " pod="kube-system/kindnet-pzzgc"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.608466     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10608e3c-2678-4bf9-9225-5b6421a2204c-lib-modules\") pod \"kube-proxy-m2jcg\" (UID: \"10608e3c-2678-4bf9-9225-5b6421a2204c\") " pod="kube-system/kube-proxy-m2jcg"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: I1124 04:19:30.688523     736 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 04:19:30 newest-cni-543467 kubelet[736]: W1124 04:19:30.873661     736 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d5de64ccb4ee82439c3eb50ca5c7d34e9cf9c4369188379e453a3607df8d63aa/crio-4e9ab424f147422957507598c0dd60b4a38e23da863a68df338c9ce6a688db2a WatchSource:0}: Error finding container 4e9ab424f147422957507598c0dd60b4a38e23da863a68df338c9ce6a688db2a: Status 404 returned error can't find the container with id 4e9ab424f147422957507598c0dd60b4a38e23da863a68df338c9ce6a688db2a
	Nov 24 04:19:33 newest-cni-543467 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 04:19:33 newest-cni-543467 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 04:19:33 newest-cni-543467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-543467 -n newest-cni-543467
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-543467 -n newest-cni-543467: exit status 2 (349.927003ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-543467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-crwzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-mvxg9 kubernetes-dashboard-855c9754f9-sxpqd
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-543467 describe pod coredns-66bc5c9577-crwzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-mvxg9 kubernetes-dashboard-855c9754f9-sxpqd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-543467 describe pod coredns-66bc5c9577-crwzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-mvxg9 kubernetes-dashboard-855c9754f9-sxpqd: exit status 1 (76.818092ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-crwzn" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-mvxg9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-sxpqd" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-543467 describe pod coredns-66bc5c9577-crwzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-mvxg9 kubernetes-dashboard-855c9754f9-sxpqd: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.49s)
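A minimal repro sketch for the failure above (not the actual start_stop_delete_test.go helpers): it drives the same pause-then-status sequence and surfaces the exit codes recorded in the log, 80 from "pause" and 2 from "status". The binary path, profile name, and flags are copied from the report; the run() helper and its error handling are illustrative assumptions.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output and exit code.
// Non-exit errors (for example, a missing binary) are ignored for brevity.
func run(name string, args ...string) (string, int) {
	out, err := exec.Command(name, args...).CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return string(out), code
}

func main() {
	const bin = "out/minikube-linux-arm64"
	const profile = "newest-cni-543467"

	// The test expects pause to exit 0; this report shows exit status 80.
	if _, code := run(bin, "pause", "-p", profile, "--alsologtostderr", "-v=1"); code != 0 {
		fmt.Printf("pause exited %d; collecting post-mortem status\n", code)
		// Post-mortem check: exit status 2 with output "Running" matches
		// the helpers_test.go:262 lines above ("may be ok").
		out, statusCode := run(bin, "status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
		fmt.Printf("status exited %d: %s", statusCode, out)
	}
}

Run from the minikube source tree after building the arm64 binary; the same sequence applies to the default-k8s-diff-port-303179 profile whose failure follows.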

TestStartStop/group/default-k8s-diff-port/serial/Pause (6.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-303179 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-303179 --alsologtostderr -v=1: exit status 80 (1.883980345s)

-- stdout --
	* Pausing node default-k8s-diff-port-303179 ... 
	
	

-- /stdout --
** stderr ** 
	I1124 04:20:44.601502  506589 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:20:44.602716  506589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:20:44.602738  506589 out.go:374] Setting ErrFile to fd 2...
	I1124 04:20:44.602746  506589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:20:44.603127  506589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:20:44.603436  506589 out.go:368] Setting JSON to false
	I1124 04:20:44.603463  506589 mustload.go:66] Loading cluster: default-k8s-diff-port-303179
	I1124 04:20:44.604188  506589 config.go:182] Loaded profile config "default-k8s-diff-port-303179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:20:44.604893  506589 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:20:44.625053  506589 host.go:66] Checking if "default-k8s-diff-port-303179" exists ...
	I1124 04:20:44.625449  506589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:20:44.685194  506589 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 04:20:44.675208685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:20:44.685909  506589 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763935228-21975/minikube-v1.37.0-1763935228-21975-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763935228-21975-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-303179 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 04:20:44.689633  506589 out.go:179] * Pausing node default-k8s-diff-port-303179 ... 
	I1124 04:20:44.693884  506589 host.go:66] Checking if "default-k8s-diff-port-303179" exists ...
	I1124 04:20:44.694349  506589 ssh_runner.go:195] Run: systemctl --version
	I1124 04:20:44.694407  506589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:20:44.712740  506589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:20:44.817386  506589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:20:44.832875  506589 pause.go:52] kubelet running: true
	I1124 04:20:44.832966  506589 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:20:45.134435  506589 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:20:45.134621  506589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:20:45.296268  506589 cri.go:89] found id: "1382fa66d6c4e982dc6b4a55d8edbbf187a34c265672f91b49f816a794745593"
	I1124 04:20:45.296298  506589 cri.go:89] found id: "94e4284acaaa64a390d75968180bd4df33aa7cd9f0ad954d942f3d86db1a8dc9"
	I1124 04:20:45.296316  506589 cri.go:89] found id: "8ea36127f1e2a736a6fa13fdcd1a92bbbae15e2705dd51f2675d3440113d8abb"
	I1124 04:20:45.296323  506589 cri.go:89] found id: "77638956b16e6865b98ae01afe0403153860b4c41ae3b6f7f1ca46f8dbd2a939"
	I1124 04:20:45.296326  506589 cri.go:89] found id: "691c670515b3649d1af8f8fc34e9afc633ea3c1168b0515b53808ecd55c01c47"
	I1124 04:20:45.296330  506589 cri.go:89] found id: "e9dbfcfcc198e21983dc97bf121184dd3db9248de5fd970ee04f8ed5f32a25ed"
	I1124 04:20:45.296332  506589 cri.go:89] found id: "99ae33342ea980542a5d01e94e4e877f3a9a7f61e7804bf44fc417104b2c8f75"
	I1124 04:20:45.296336  506589 cri.go:89] found id: "5d9db75c10b0014f3fe772d0746170c1ac112901a7f81fceee9ad108d08be4d4"
	I1124 04:20:45.296339  506589 cri.go:89] found id: "7b5099e4fd3c18fc391ec751d92268e0f783642d7729eae47a7899934d2bf05a"
	I1124 04:20:45.296347  506589 cri.go:89] found id: "71c2ac2a77db96c635e8a7e09623f36fea99c4831f268be3cc0d4dd5cdcaa5d4"
	I1124 04:20:45.296354  506589 cri.go:89] found id: "e6e6068e06b191282d1459f1f15294aab079f0847575d0a311c401d7cff667c8"
	I1124 04:20:45.296357  506589 cri.go:89] found id: ""
	I1124 04:20:45.296457  506589 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:20:45.308795  506589 retry.go:31] will retry after 148.052932ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:20:45Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:20:45.457171  506589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:20:45.470684  506589 pause.go:52] kubelet running: false
	I1124 04:20:45.470775  506589 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:20:45.662396  506589 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:20:45.662539  506589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:20:45.739856  506589 cri.go:89] found id: "1382fa66d6c4e982dc6b4a55d8edbbf187a34c265672f91b49f816a794745593"
	I1124 04:20:45.739885  506589 cri.go:89] found id: "94e4284acaaa64a390d75968180bd4df33aa7cd9f0ad954d942f3d86db1a8dc9"
	I1124 04:20:45.739892  506589 cri.go:89] found id: "8ea36127f1e2a736a6fa13fdcd1a92bbbae15e2705dd51f2675d3440113d8abb"
	I1124 04:20:45.739896  506589 cri.go:89] found id: "77638956b16e6865b98ae01afe0403153860b4c41ae3b6f7f1ca46f8dbd2a939"
	I1124 04:20:45.739899  506589 cri.go:89] found id: "691c670515b3649d1af8f8fc34e9afc633ea3c1168b0515b53808ecd55c01c47"
	I1124 04:20:45.739904  506589 cri.go:89] found id: "e9dbfcfcc198e21983dc97bf121184dd3db9248de5fd970ee04f8ed5f32a25ed"
	I1124 04:20:45.739920  506589 cri.go:89] found id: "99ae33342ea980542a5d01e94e4e877f3a9a7f61e7804bf44fc417104b2c8f75"
	I1124 04:20:45.739924  506589 cri.go:89] found id: "5d9db75c10b0014f3fe772d0746170c1ac112901a7f81fceee9ad108d08be4d4"
	I1124 04:20:45.739927  506589 cri.go:89] found id: "7b5099e4fd3c18fc391ec751d92268e0f783642d7729eae47a7899934d2bf05a"
	I1124 04:20:45.739935  506589 cri.go:89] found id: "71c2ac2a77db96c635e8a7e09623f36fea99c4831f268be3cc0d4dd5cdcaa5d4"
	I1124 04:20:45.739938  506589 cri.go:89] found id: "e6e6068e06b191282d1459f1f15294aab079f0847575d0a311c401d7cff667c8"
	I1124 04:20:45.739942  506589 cri.go:89] found id: ""
	I1124 04:20:45.740003  506589 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:20:45.751616  506589 retry.go:31] will retry after 360.211581ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:20:45Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:20:46.112146  506589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:20:46.125602  506589 pause.go:52] kubelet running: false
	I1124 04:20:46.125680  506589 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 04:20:46.321663  506589 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 04:20:46.321791  506589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 04:20:46.392441  506589 cri.go:89] found id: "1382fa66d6c4e982dc6b4a55d8edbbf187a34c265672f91b49f816a794745593"
	I1124 04:20:46.392464  506589 cri.go:89] found id: "94e4284acaaa64a390d75968180bd4df33aa7cd9f0ad954d942f3d86db1a8dc9"
	I1124 04:20:46.392470  506589 cri.go:89] found id: "8ea36127f1e2a736a6fa13fdcd1a92bbbae15e2705dd51f2675d3440113d8abb"
	I1124 04:20:46.392474  506589 cri.go:89] found id: "77638956b16e6865b98ae01afe0403153860b4c41ae3b6f7f1ca46f8dbd2a939"
	I1124 04:20:46.392487  506589 cri.go:89] found id: "691c670515b3649d1af8f8fc34e9afc633ea3c1168b0515b53808ecd55c01c47"
	I1124 04:20:46.392492  506589 cri.go:89] found id: "e9dbfcfcc198e21983dc97bf121184dd3db9248de5fd970ee04f8ed5f32a25ed"
	I1124 04:20:46.392495  506589 cri.go:89] found id: "99ae33342ea980542a5d01e94e4e877f3a9a7f61e7804bf44fc417104b2c8f75"
	I1124 04:20:46.392499  506589 cri.go:89] found id: "5d9db75c10b0014f3fe772d0746170c1ac112901a7f81fceee9ad108d08be4d4"
	I1124 04:20:46.392502  506589 cri.go:89] found id: "7b5099e4fd3c18fc391ec751d92268e0f783642d7729eae47a7899934d2bf05a"
	I1124 04:20:46.392518  506589 cri.go:89] found id: "71c2ac2a77db96c635e8a7e09623f36fea99c4831f268be3cc0d4dd5cdcaa5d4"
	I1124 04:20:46.392524  506589 cri.go:89] found id: "e6e6068e06b191282d1459f1f15294aab079f0847575d0a311c401d7cff667c8"
	I1124 04:20:46.392528  506589 cri.go:89] found id: ""
	I1124 04:20:46.392585  506589 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 04:20:46.407560  506589 out.go:203] 
	W1124 04:20:46.410606  506589 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:20:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:20:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 04:20:46.410633  506589 out.go:285] * 
	* 
	W1124 04:20:46.416927  506589 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 04:20:46.419948  506589 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-303179 --alsologtostderr -v=1 failed: exit status 80
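Note: every retry in the stderr trace above fails the same way. crictl, talking to CRI-O over its socket, enumerates the kube-system containers, but the follow-up sudo runc list -f json aborts because /run/runc, runc's default state root, does not exist on this crio node, which is what drives the GUEST_PAUSE exit. A minimal Go sketch of that probe pair, assuming it is run as root on the node itself (an illustration only, not minikube's actual pause code):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output and error,
// mirroring the two probes minikube issued over SSH in the log above.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\nerr: %v\n%s\n", name, args, err, out)
}

func main() {
	// Succeeds in the log: CRI-O answers over /var/run/crio/crio.sock.
	run("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	// Fails in the log: "runc list" reads runc's default state root
	// /run/runc directly, which is absent on this node.
	run("sudo", "runc", "list", "-f", "json")
}

The missing /run/runc suggests the containers are managed under a different OCI runtime or state root than runc's default, though the log itself only shows the missing directory.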
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-303179
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-303179:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748",
	        "Created": "2025-11-24T04:17:56.199463475Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501181,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:19:36.134917506Z",
	            "FinishedAt": "2025-11-24T04:19:35.094042629Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/hostname",
	        "HostsPath": "/var/lib/docker/containers/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/hosts",
	        "LogPath": "/var/lib/docker/containers/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748-json.log",
	        "Name": "/default-k8s-diff-port-303179",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-303179:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-303179",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748",
	                "LowerDir": "/var/lib/docker/overlay2/f795050361c122f8186f9d116815a241873f66c7dfed963bb16fb3ec6718f306-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f795050361c122f8186f9d116815a241873f66c7dfed963bb16fb3ec6718f306/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f795050361c122f8186f9d116815a241873f66c7dfed963bb16fb3ec6718f306/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f795050361c122f8186f9d116815a241873f66c7dfed963bb16fb3ec6718f306/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-303179",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-303179/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-303179",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-303179",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-303179",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e640710dbb3060c2921575558df3e127df7bd22606e056c820d04b626c1f3cc",
	            "SandboxKey": "/var/run/docker/netns/3e640710dbb3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-303179": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:06:64:0c:a7:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7dd701f3791fa7f6d8831a64698f944225df32ea42e663c9bfc78d30eb09b5d6",
	                    "EndpointID": "293ef22cb9810246222e3d80c92d7380f75c0de283695bc3203d3c5c4709eec6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-303179",
	                        "c6af048d3f8e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
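The NetworkSettings.Ports map in this inspect output is exactly what the earlier docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" call in the stderr trace reads to locate the SSH port (33466 here). A self-contained sketch that evaluates the same Go template against a trimmed copy of the JSON above:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// A trimmed copy of the NetworkSettings block from the inspect output above.
const inspectJSON = `{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"33466"}]}}}`

func main() {
	var c struct {
		NetworkSettings struct {
			Ports map[string][]struct{ HostIp, HostPort string }
		}
	}
	if err := json.Unmarshal([]byte(inspectJSON), &c); err != nil {
		panic(err)
	}
	// The same template string minikube passed to docker inspect -f.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, c); err != nil { // prints: 33466
		panic(err)
	}
}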
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179: exit status 2 (368.026218ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
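The exit status 2 here matches the state the failed pause left behind: docker inspect above still reports State.Running=true for the node container, while the pause path had already run sudo systemctl disable --now kubelet (the later probes log "kubelet running: false"). A small sketch of an equivalent spot check, assuming the profile name from this run (illustrative only, not the harness's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Host state as the harness queried it: prints "Running" even mid-pause.
	host, _ := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "default-k8s-diff-port-303179").Output()
	fmt.Printf("host: %s\n", host)

	// Kubelet state inside the node: expected "inactive" after the partial pause.
	kubelet, _ := exec.Command("out/minikube-linux-arm64", "-p",
		"default-k8s-diff-port-303179", "ssh", "--",
		"sudo", "systemctl", "is-active", "kubelet").Output()
	fmt.Printf("kubelet: %s\n", kubelet)
}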
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-303179 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-303179 logs -n 25: (1.355813567s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-600301 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p disable-driver-mounts-995056                                                                                                                                                                                                               │ disable-driver-mounts-995056 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:19 UTC │
	│ image   │ embed-certs-520529 image list --format=json                                                                                                                                                                                                   │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ pause   │ -p embed-certs-520529 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │                     │
	│ delete  │ -p embed-certs-520529                                                                                                                                                                                                                         │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ delete  │ -p embed-certs-520529                                                                                                                                                                                                                         │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ start   │ -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-543467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ stop    │ -p newest-cni-543467 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-543467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ start   │ -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-303179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-303179 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ image   │ newest-cni-543467 image list --format=json                                                                                                                                                                                                    │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ pause   │ -p newest-cni-543467 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-303179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ start   │ -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:20 UTC │
	│ delete  │ -p newest-cni-543467                                                                                                                                                                                                                          │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ delete  │ -p newest-cni-543467                                                                                                                                                                                                                          │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ start   │ -p auto-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-778509                  │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ image   │ default-k8s-diff-port-303179 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:20 UTC │ 24 Nov 25 04:20 UTC │
	│ pause   │ -p default-k8s-diff-port-303179 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:19:41
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:19:41.714561  502762 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:19:41.714787  502762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:19:41.714820  502762 out.go:374] Setting ErrFile to fd 2...
	I1124 04:19:41.714843  502762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:19:41.715126  502762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:19:41.715597  502762 out.go:368] Setting JSON to false
	I1124 04:19:41.716579  502762 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10911,"bootTime":1763947071,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:19:41.716691  502762 start.go:143] virtualization:  
	I1124 04:19:41.720399  502762 out.go:179] * [auto-778509] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:19:41.723984  502762 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:19:41.724063  502762 notify.go:221] Checking for updates...
	I1124 04:19:41.731442  502762 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:19:41.734577  502762 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:19:41.737841  502762 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:19:41.740942  502762 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:19:41.743852  502762 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:19:41.747232  502762 config.go:182] Loaded profile config "default-k8s-diff-port-303179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:19:41.747374  502762 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:19:41.792168  502762 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:19:41.792292  502762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:19:41.881715  502762 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 04:19:41.868627583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:19:41.881866  502762 docker.go:319] overlay module found
	I1124 04:19:41.885136  502762 out.go:179] * Using the docker driver based on user configuration
	I1124 04:19:41.888063  502762 start.go:309] selected driver: docker
	I1124 04:19:41.888080  502762 start.go:927] validating driver "docker" against <nil>
	I1124 04:19:41.888094  502762 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:19:41.888865  502762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:19:41.980590  502762 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 04:19:41.970535249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:19:41.980737  502762 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 04:19:41.980954  502762 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:19:41.983793  502762 out.go:179] * Using Docker driver with root privileges
	I1124 04:19:41.986794  502762 cni.go:84] Creating CNI manager for ""
	I1124 04:19:41.986883  502762 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:19:41.986897  502762 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 04:19:41.986994  502762 start.go:353] cluster config:
	{Name:auto-778509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-778509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:19:41.990241  502762 out.go:179] * Starting "auto-778509" primary control-plane node in "auto-778509" cluster
	I1124 04:19:41.993106  502762 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:19:41.996095  502762 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:19:41.998914  502762 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:19:41.998986  502762 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 04:19:41.999002  502762 cache.go:65] Caching tarball of preloaded images
	I1124 04:19:41.999106  502762 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:19:41.999123  502762 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 04:19:41.999258  502762 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/config.json ...
	I1124 04:19:41.999287  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/config.json: {Name:mkc5b7fac5f8da08cfeeb4fbe9dcebf6c531abcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:41.999487  502762 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:19:42.029240  502762 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:19:42.029271  502762 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:19:42.029288  502762 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:19:42.029328  502762 start.go:360] acquireMachinesLock for auto-778509: {Name:mkfa7cae0269d4581c03d0cc14aab7d3f8ab8b40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:19:42.029442  502762 start.go:364] duration metric: took 92.441µs to acquireMachinesLock for "auto-778509"
	I1124 04:19:42.029474  502762 start.go:93] Provisioning new machine with config: &{Name:auto-778509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-778509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:19:42.029553  502762 start.go:125] createHost starting for "" (driver="docker")
	I1124 04:19:41.091576  500996 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 04:19:41.091603  500996 machine.go:97] duration metric: took 4.563202446s to provisionDockerMachine
	I1124 04:19:41.091615  500996 start.go:293] postStartSetup for "default-k8s-diff-port-303179" (driver="docker")
	I1124 04:19:41.091627  500996 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:19:41.091716  500996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:19:41.091766  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:41.125225  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:41.261038  500996 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:19:41.264526  500996 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:19:41.264574  500996 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:19:41.264586  500996 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:19:41.264640  500996 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:19:41.264727  500996 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:19:41.264835  500996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:19:41.272257  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:19:41.299880  500996 start.go:296] duration metric: took 208.249493ms for postStartSetup
	I1124 04:19:41.299977  500996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:19:41.300023  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:41.320519  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:41.424440  500996 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:19:41.430173  500996 fix.go:56] duration metric: took 5.366786919s for fixHost
	I1124 04:19:41.430201  500996 start.go:83] releasing machines lock for "default-k8s-diff-port-303179", held for 5.366841475s
	I1124 04:19:41.430268  500996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303179
	I1124 04:19:41.457123  500996 ssh_runner.go:195] Run: cat /version.json
	I1124 04:19:41.457184  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:41.457465  500996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:19:41.457521  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:41.489367  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:41.506271  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:41.695530  500996 ssh_runner.go:195] Run: systemctl --version
	I1124 04:19:41.702211  500996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:19:41.747854  500996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:19:41.754158  500996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:19:41.754234  500996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:19:41.763789  500996 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
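The find/-exec step above neutralizes any bridge or podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so they can be restored later. A minimal Go sketch of that disable-by-rename pattern, assuming the same directory and suffix as the log (the helper name is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfs renames bridge/podman CNI config files in dir by
// appending suffix, so the runtime stops loading them (mirrors the
// find ... -exec mv {} {}.mk_disabled step in the log above).
func disableBridgeCNIConfs(dir, suffix string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, suffix) {
			continue // already disabled, or not a file
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+suffix); err != nil {
			return moved, err
		}
		moved = append(moved, src)
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIConfs("/etc/cni/net.d", ".mk_disabled")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("disabled:", moved)
}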
	I1124 04:19:41.763817  500996 start.go:496] detecting cgroup driver to use...
	I1124 04:19:41.763849  500996 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:19:41.763912  500996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:19:41.782997  500996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:19:41.805232  500996 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:19:41.805294  500996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:19:41.834508  500996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:19:41.858970  500996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:19:42.047602  500996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:19:42.253237  500996 docker.go:234] disabling docker service ...
	I1124 04:19:42.253325  500996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:19:42.273360  500996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:19:42.289502  500996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:19:42.470535  500996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:19:42.647293  500996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
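The two blocks above take cri-docker and docker out of the picture with the same stop/disable/mask sequence, so only CRI-O owns the container runtime. A hedged Go sketch of that sequence via os/exec (unit names come from the log; applying both verbs to both units is a simplification of what the log actually runs):

package main

import (
	"fmt"
	"os/exec"
)

// neutralizeUnit runs the stop/disable/mask sequence the log applies to
// docker.socket and docker.service. The stop is best-effort because the
// unit may not be active, or even installed, on this host.
func neutralizeUnit(unit string) error {
	_ = exec.Command("sudo", "systemctl", "stop", "-f", unit).Run() // best effort
	for _, verb := range []string{"disable", "mask"} {
		if out, err := exec.Command("sudo", "systemctl", verb, unit).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %s %s: %v: %s", verb, unit, err, out)
		}
	}
	return nil
}

func main() {
	for _, u := range []string{"docker.socket", "docker.service"} {
		if err := neutralizeUnit(u); err != nil {
			fmt.Println(err)
		}
	}
}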
	I1124 04:19:42.659972  500996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:19:42.679686  500996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:19:42.679751  500996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:42.693117  500996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:19:42.693186  500996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:42.706190  500996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:42.721986  500996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:42.731746  500996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:19:42.769860  500996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:42.796741  500996 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:42.806041  500996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
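Each sed -i above rewrites one key of /etc/crio/crio.conf.d/02-crio.conf in place (pause_image, cgroup_manager, conmon_cgroup, the unprivileged-port sysctl). A minimal Go sketch of the same replace-whole-line edit, assuming the simple `key = value` layout the commands above target:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfValue rewrites every line matching `key = ...` in path to
// `key = "value"` — the same edit the sed -i commands above perform.
func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
		panic(err)
	}
	if err := setConfValue(conf, "cgroup_manager", "cgroupfs"); err != nil {
		panic(err)
	}
}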
	I1124 04:19:42.825062  500996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:19:42.833856  500996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:19:42.842563  500996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:43.031058  500996 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 04:19:43.262697  500996 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:19:43.262769  500996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:19:43.267143  500996 start.go:564] Will wait 60s for crictl version
	I1124 04:19:43.267218  500996 ssh_runner.go:195] Run: which crictl
	I1124 04:19:43.271618  500996 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:19:43.304681  500996 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
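Both 60s waits above poll rather than sleep blindly: first for the crio.sock path to appear after the restart, then for crictl to answer a version call. A small Go sketch of the path wait (socket path from the log; the interval and helper name are illustrative):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the deadline passes, like the
// "Will wait 60s for socket path" step above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is up")
}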
	I1124 04:19:43.304762  500996 ssh_runner.go:195] Run: crio --version
	I1124 04:19:43.336279  500996 ssh_runner.go:195] Run: crio --version
	I1124 04:19:43.374027  500996 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:19:43.376931  500996 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:19:43.396047  500996 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 04:19:43.400619  500996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:19:43.410613  500996 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:19:43.410755  500996 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:19:43.410807  500996 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:19:43.451172  500996 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:19:43.451194  500996 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:19:43.451252  500996 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:19:43.480316  500996 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:19:43.480389  500996 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:19:43.480411  500996 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1124 04:19:43.480550  500996 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-303179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 04:19:43.480672  500996 ssh_runner.go:195] Run: crio config
	I1124 04:19:43.548149  500996 cni.go:84] Creating CNI manager for ""
	I1124 04:19:43.548219  500996 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:19:43.548275  500996 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:19:43.548318  500996 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-303179 NodeName:default-k8s-diff-port-303179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:19:43.548515  500996 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-303179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 04:19:43.548634  500996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:19:43.557227  500996 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:19:43.557348  500996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:19:43.565398  500996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 04:19:43.579269  500996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:19:43.592983  500996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
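The kubeadm.yaml rendered above is copied to /var/tmp/minikube/kubeadm.yaml.new before use. One property worth checking in such a config is that the kubelet's cgroupDriver and CRI endpoint agree with the CRI-O settings applied earlier; a sketch using gopkg.in/yaml.v3 (an assumed external dependency) against the KubeletConfiguration fields shown in the log:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig picks out the two fields that must agree with the CRI-O
// configuration applied earlier (cgroupfs driver, crio.sock endpoint).
type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
}

// doc reproduces the relevant fragment of the KubeletConfiguration above.
const doc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
`

func main() {
	var c kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &c); err != nil {
		panic(err)
	}
	if c.CgroupDriver != "cgroupfs" {
		panic("kubelet and CRI-O cgroup drivers disagree")
	}
	fmt.Println("kubelet talks to", c.ContainerRuntimeEndpoint)
}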
	I1124 04:19:43.606361  500996 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:19:43.611101  500996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
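The /etc/hosts one-liner above is idempotent: grep -v drops any stale control-plane.minikube.internal line, the echo appends the fresh mapping, and the result is swapped in via a temp file. The same replace-then-append trick in Go (path and names from the log; the helper name is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so exactly one line maps name to ip,
// dropping any stale tab-separated entry for name first.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the old mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}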
	I1124 04:19:43.620974  500996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:43.770597  500996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:19:43.788250  500996 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179 for IP: 192.168.85.2
	I1124 04:19:43.788324  500996 certs.go:195] generating shared ca certs ...
	I1124 04:19:43.788357  500996 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:43.788543  500996 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:19:43.788642  500996 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:19:43.788670  500996 certs.go:257] generating profile certs ...
	I1124 04:19:43.788807  500996 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.key
	I1124 04:19:43.788916  500996 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key.0cae04f4
	I1124 04:19:43.789023  500996 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.key
	I1124 04:19:43.789196  500996 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:19:43.789271  500996 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:19:43.789300  500996 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:19:43.789374  500996 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:19:43.789432  500996 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:19:43.789498  500996 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:19:43.789589  500996 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:19:43.790408  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:19:43.858294  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:19:43.887335  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:19:43.923632  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:19:43.951242  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 04:19:43.990988  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 04:19:44.016864  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:19:44.037994  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 04:19:44.089306  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:19:44.155766  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:19:44.184757  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:19:44.207627  500996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:19:44.222325  500996 ssh_runner.go:195] Run: openssl version
	I1124 04:19:44.229324  500996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:19:44.238997  500996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:19:44.243258  500996 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:19:44.243341  500996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:19:44.301149  500996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 04:19:44.311769  500996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:19:44.321037  500996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:44.325315  500996 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:44.325385  500996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:44.367344  500996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:19:44.376609  500996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:19:44.386665  500996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:19:44.391184  500996 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:19:44.391262  500996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:19:44.435577  500996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
	I1124 04:19:44.444653  500996 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:19:44.450874  500996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 04:19:44.493512  500996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 04:19:44.536190  500996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 04:19:44.592267  500996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 04:19:44.688152  500996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 04:19:44.770155  500996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
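Each `openssl x509 -checkend 86400` above asks one question: does this certificate expire within the next 24 hours? The equivalent check in Go with crypto/x509 (the cert path mirrors the first check above; the helper name is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d —
// the same question `openssl x509 -checkend` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h — regenerate")
	}
}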
	I1124 04:19:44.868237  500996 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:19:44.868339  500996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:19:44.868400  500996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:19:44.986834  500996 cri.go:89] found id: "e9dbfcfcc198e21983dc97bf121184dd3db9248de5fd970ee04f8ed5f32a25ed"
	I1124 04:19:44.986872  500996 cri.go:89] found id: "99ae33342ea980542a5d01e94e4e877f3a9a7f61e7804bf44fc417104b2c8f75"
	I1124 04:19:44.986880  500996 cri.go:89] found id: "5d9db75c10b0014f3fe772d0746170c1ac112901a7f81fceee9ad108d08be4d4"
	I1124 04:19:44.986883  500996 cri.go:89] found id: "7b5099e4fd3c18fc391ec751d92268e0f783642d7729eae47a7899934d2bf05a"
	I1124 04:19:44.986887  500996 cri.go:89] found id: ""
	I1124 04:19:44.986943  500996 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 04:19:45.049575  500996 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:19:45Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:19:45.049691  500996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:19:45.094902  500996 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 04:19:45.094926  500996 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 04:19:45.094992  500996 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 04:19:45.133776  500996 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 04:19:45.134221  500996 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-303179" does not appear in /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:19:45.134350  500996 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-289526/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-303179" cluster setting kubeconfig missing "default-k8s-diff-port-303179" context setting]
	I1124 04:19:45.134730  500996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:45.136521  500996 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 04:19:45.187361  500996 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 04:19:45.187455  500996 kubeadm.go:602] duration metric: took 92.51304ms to restartPrimaryControlPlane
	I1124 04:19:45.187480  500996 kubeadm.go:403] duration metric: took 319.249917ms to StartCluster
	I1124 04:19:45.187525  500996 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:45.187676  500996 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:19:45.188444  500996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:45.188707  500996 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:19:45.189131  500996 config.go:182] Loaded profile config "default-k8s-diff-port-303179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:19:45.189142  500996 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:19:45.189261  500996 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-303179"
	I1124 04:19:45.189280  500996 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-303179"
	W1124 04:19:45.189288  500996 addons.go:248] addon storage-provisioner should already be in state true
	I1124 04:19:45.189316  500996 host.go:66] Checking if "default-k8s-diff-port-303179" exists ...
	I1124 04:19:45.189323  500996 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-303179"
	I1124 04:19:45.189339  500996 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-303179"
	W1124 04:19:45.189346  500996 addons.go:248] addon dashboard should already be in state true
	I1124 04:19:45.189366  500996 host.go:66] Checking if "default-k8s-diff-port-303179" exists ...
	I1124 04:19:45.189865  500996 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:19:45.189949  500996 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:19:45.194712  500996 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-303179"
	I1124 04:19:45.194755  500996 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-303179"
	I1124 04:19:45.195159  500996 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:19:45.195426  500996 out.go:179] * Verifying Kubernetes components...
	I1124 04:19:45.202347  500996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:45.247063  500996 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:19:45.250492  500996 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:19:45.250522  500996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:19:45.250615  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:45.281634  500996 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 04:19:45.285053  500996 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 04:19:45.294008  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 04:19:45.294035  500996 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 04:19:45.294257  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:45.295301  500996 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-303179"
	W1124 04:19:45.295322  500996 addons.go:248] addon default-storageclass should already be in state true
	I1124 04:19:45.295349  500996 host.go:66] Checking if "default-k8s-diff-port-303179" exists ...
	I1124 04:19:45.295792  500996 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:19:45.328139  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:45.356022  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:45.362065  500996 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:19:45.362092  500996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:19:45.362171  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:45.385607  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:42.034223  502762 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 04:19:42.034617  502762 start.go:159] libmachine.API.Create for "auto-778509" (driver="docker")
	I1124 04:19:42.034677  502762 client.go:173] LocalClient.Create starting
	I1124 04:19:42.034783  502762 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem
	I1124 04:19:42.034847  502762 main.go:143] libmachine: Decoding PEM data...
	I1124 04:19:42.034884  502762 main.go:143] libmachine: Parsing certificate...
	I1124 04:19:42.034983  502762 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem
	I1124 04:19:42.035033  502762 main.go:143] libmachine: Decoding PEM data...
	I1124 04:19:42.035053  502762 main.go:143] libmachine: Parsing certificate...
	I1124 04:19:42.035532  502762 cli_runner.go:164] Run: docker network inspect auto-778509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 04:19:42.057551  502762 cli_runner.go:211] docker network inspect auto-778509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 04:19:42.057652  502762 network_create.go:284] running [docker network inspect auto-778509] to gather additional debugging logs...
	I1124 04:19:42.057673  502762 cli_runner.go:164] Run: docker network inspect auto-778509
	W1124 04:19:42.077547  502762 cli_runner.go:211] docker network inspect auto-778509 returned with exit code 1
	I1124 04:19:42.077590  502762 network_create.go:287] error running [docker network inspect auto-778509]: docker network inspect auto-778509: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-778509 not found
	I1124 04:19:42.077607  502762 network_create.go:289] output of [docker network inspect auto-778509]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-778509 not found
	
	** /stderr **
	I1124 04:19:42.077733  502762 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:19:42.102281  502762 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-740fb099fccc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:7a:9c:b0:4d:41} reservation:<nil>}
	I1124 04:19:42.102791  502762 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b0f25a7c590 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:53:b3:a1:55:1a} reservation:<nil>}
	I1124 04:19:42.103084  502762 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c1d995330d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:83:d9:0c:83:10} reservation:<nil>}
	I1124 04:19:42.103532  502762 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a6a4e0}
	I1124 04:19:42.103553  502762 network_create.go:124] attempt to create docker network auto-778509 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 04:19:42.103613  502762 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-778509 auto-778509
	I1124 04:19:42.197872  502762 network_create.go:108] docker network auto-778509 192.168.76.0/24 created
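The subnet scan above walks candidate private /24 blocks (192.168.49, .58, .67) until one is not claimed by an existing bridge, then settles on 192.168.76.0/24. A sketch of that first-free scan; the step of 9 between candidates is inferred from the subnets in the log, and taken() stands in for the real check against `docker network inspect` output:

package main

import "fmt"

// firstFreeSubnet walks candidate /24 blocks and returns the first one the
// taken predicate does not reject.
func firstFreeSubnet(taken func(cidr string) bool) (string, error) {
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken(cidr) {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free private /24 found")
}

func main() {
	// The three subnets the log reports as already in use.
	used := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	cidr, err := firstFreeSubnet(func(c string) bool { return used[c] })
	if err != nil {
		panic(err)
	}
	fmt.Println("using", cidr) // 192.168.76.0/24, matching the log
}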
	I1124 04:19:42.197915  502762 kic.go:121] calculated static IP "192.168.76.2" for the "auto-778509" container
	I1124 04:19:42.197998  502762 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 04:19:42.225948  502762 cli_runner.go:164] Run: docker volume create auto-778509 --label name.minikube.sigs.k8s.io=auto-778509 --label created_by.minikube.sigs.k8s.io=true
	I1124 04:19:42.258689  502762 oci.go:103] Successfully created a docker volume auto-778509
	I1124 04:19:42.258776  502762 cli_runner.go:164] Run: docker run --rm --name auto-778509-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-778509 --entrypoint /usr/bin/test -v auto-778509:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 04:19:42.875932  502762 oci.go:107] Successfully prepared a docker volume auto-778509
	I1124 04:19:42.875993  502762 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:19:42.876006  502762 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 04:19:42.876073  502762 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-778509:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 04:19:45.761317  500996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:19:45.788586  500996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:19:45.813271  500996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:19:45.919766  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 04:19:45.919802  500996 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 04:19:45.925214  500996 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-303179" to be "Ready" ...
	I1124 04:19:46.131204  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 04:19:46.131234  500996 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 04:19:46.224377  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 04:19:46.224415  500996 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 04:19:46.292863  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 04:19:46.292904  500996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 04:19:46.350264  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 04:19:46.350292  500996 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 04:19:46.395116  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 04:19:46.395143  500996 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 04:19:46.450354  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 04:19:46.450390  500996 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 04:19:46.479382  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 04:19:46.479409  500996 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 04:19:46.503776  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 04:19:46.503807  500996 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 04:19:46.518702  500996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 04:19:48.397833  502762 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-778509:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (5.521707022s)
	I1124 04:19:48.397883  502762 kic.go:203] duration metric: took 5.521872061s to extract preloaded images to volume ...
	W1124 04:19:48.398039  502762 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 04:19:48.398150  502762 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 04:19:48.511173  502762 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-778509 --name auto-778509 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-778509 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-778509 --network auto-778509 --ip 192.168.76.2 --volume auto-778509:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 04:19:48.958585  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Running}}
	I1124 04:19:48.980560  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Status}}
	I1124 04:19:49.021781  502762 cli_runner.go:164] Run: docker exec auto-778509 stat /var/lib/dpkg/alternatives/iptables
	I1124 04:19:49.101621  502762 oci.go:144] the created container "auto-778509" has a running status.
	I1124 04:19:49.101656  502762 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa...
	I1124 04:19:49.505099  502762 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 04:19:49.541949  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Status}}
	I1124 04:19:49.574957  502762 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 04:19:49.574983  502762 kic_runner.go:114] Args: [docker exec --privileged auto-778509 chown docker:docker /home/docker/.ssh/authorized_keys]
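The kic SSH setup above generates a fresh RSA keypair on the host, then installs the public half as /home/docker/.ssh/authorized_keys inside the container and fixes its ownership. A sketch of the generate-and-install half using golang.org/x/crypto/ssh (an assumed external dependency; output paths are illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private half, PEM-encoded, kept on the host as id_rsa.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	// Public half in authorized_keys format, copied into the container.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote id_rsa / id_rsa.pub")
}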
	I1124 04:19:49.652064  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Status}}
	I1124 04:19:49.684078  502762 machine.go:94] provisionDockerMachine start ...
	I1124 04:19:49.684169  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:49.708173  502762 main.go:143] libmachine: Using SSH client type: native
	I1124 04:19:49.708504  502762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 04:19:49.708513  502762 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:19:49.709201  502762 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 04:19:51.643750  500996 node_ready.go:49] node "default-k8s-diff-port-303179" is "Ready"
	I1124 04:19:51.643779  500996 node_ready.go:38] duration metric: took 5.718530892s for node "default-k8s-diff-port-303179" to be "Ready" ...
	I1124 04:19:51.643793  500996 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:19:51.643852  500996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:19:51.868994  500996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.080369953s)
	I1124 04:19:53.182988  500996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.369677982s)
	I1124 04:19:53.183125  500996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.664388185s)
	I1124 04:19:53.183257  500996 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.539393244s)
	I1124 04:19:53.183277  500996 api_server.go:72] duration metric: took 7.994534401s to wait for apiserver process to appear ...
	I1124 04:19:53.183284  500996 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:19:53.183305  500996 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 04:19:53.186530  500996 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-303179 addons enable metrics-server
	
	I1124 04:19:53.189348  500996 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1124 04:19:53.192264  500996 addons.go:530] duration metric: took 8.003124691s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1124 04:19:53.202819  500996 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 04:19:53.202851  500996 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 04:19:53.684024  500996 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 04:19:53.693702  500996 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 04:19:53.694831  500996 api_server.go:141] control plane version: v1.34.1
	I1124 04:19:53.694859  500996 api_server.go:131] duration metric: took 511.564647ms to wait for apiserver health ...
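The healthz exchange above is the normal restart pattern: the first probe gets a 500 while poststarthook/rbac/bootstrap-roles is still pending, and a later probe gets 200 "ok". A sketch of that poll-until-ok loop; InsecureSkipVerify stands in for loading the cluster CA, which a real client would do instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200 "ok" or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz not ok within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}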
	I1124 04:19:53.694872  500996 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:19:53.701883  500996 system_pods.go:59] 8 kube-system pods found
	I1124 04:19:53.701927  500996 system_pods.go:61] "coredns-66bc5c9577-jtn7v" [cd5d148d-8e9e-4bac-a54c-d71637a8cb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:19:53.701942  500996 system_pods.go:61] "etcd-default-k8s-diff-port-303179" [e10607ab-490f-4a61-a1f9-a3c5c06f86b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:19:53.701948  500996 system_pods.go:61] "kindnet-wpp6p" [0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3] Running
	I1124 04:19:53.701960  500996 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-303179" [6f48a510-e83c-4667-a542-5953227201ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:19:53.701967  500996 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-303179" [6f1d9347-dbe0-4770-b829-de7cf4fe9934] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:19:53.701977  500996 system_pods.go:61] "kube-proxy-dxbvb" [24177ca5-eb2f-4ac2-a32c-d384781bad58] Running
	I1124 04:19:53.701985  500996 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-303179" [b819c0ad-3c09-46e4-84a8-e7f1ad21b768] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:19:53.701989  500996 system_pods.go:61] "storage-provisioner" [4d7d1174-e169-4297-a8a2-55a47f03d9d6] Running
	I1124 04:19:53.701995  500996 system_pods.go:74] duration metric: took 7.112865ms to wait for pod list to return data ...
	I1124 04:19:53.702007  500996 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:19:53.710162  500996 default_sa.go:45] found service account: "default"
	I1124 04:19:53.710191  500996 default_sa.go:55] duration metric: took 8.176615ms for default service account to be created ...
	I1124 04:19:53.710208  500996 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 04:19:53.717323  500996 system_pods.go:86] 8 kube-system pods found
	I1124 04:19:53.717361  500996 system_pods.go:89] "coredns-66bc5c9577-jtn7v" [cd5d148d-8e9e-4bac-a54c-d71637a8cb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:19:53.717371  500996 system_pods.go:89] "etcd-default-k8s-diff-port-303179" [e10607ab-490f-4a61-a1f9-a3c5c06f86b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:19:53.717377  500996 system_pods.go:89] "kindnet-wpp6p" [0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3] Running
	I1124 04:19:53.717384  500996 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-303179" [6f48a510-e83c-4667-a542-5953227201ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:19:53.717391  500996 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-303179" [6f1d9347-dbe0-4770-b829-de7cf4fe9934] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:19:53.717396  500996 system_pods.go:89] "kube-proxy-dxbvb" [24177ca5-eb2f-4ac2-a32c-d384781bad58] Running
	I1124 04:19:53.717402  500996 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-303179" [b819c0ad-3c09-46e4-84a8-e7f1ad21b768] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:19:53.717407  500996 system_pods.go:89] "storage-provisioner" [4d7d1174-e169-4297-a8a2-55a47f03d9d6] Running
	I1124 04:19:53.717415  500996 system_pods.go:126] duration metric: took 7.199931ms to wait for k8s-apps to be running ...
	I1124 04:19:53.717432  500996 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 04:19:53.717488  500996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:19:53.732908  500996 system_svc.go:56] duration metric: took 15.477675ms WaitForService to wait for kubelet
	I1124 04:19:53.732946  500996 kubeadm.go:587] duration metric: took 8.544193668s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:19:53.732965  500996 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:19:53.742917  500996 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:19:53.742953  500996 node_conditions.go:123] node cpu capacity is 2
	I1124 04:19:53.742966  500996 node_conditions.go:105] duration metric: took 9.995813ms to run NodePressure ...
	I1124 04:19:53.742980  500996 start.go:242] waiting for startup goroutines ...
	I1124 04:19:53.742988  500996 start.go:247] waiting for cluster config update ...
	I1124 04:19:53.743001  500996 start.go:256] writing updated cluster config ...
	I1124 04:19:53.743269  500996 ssh_runner.go:195] Run: rm -f paused
	I1124 04:19:53.748360  500996 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:19:53.753936  500996 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jtn7v" in "kube-system" namespace to be "Ready" or be gone ...
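The "extra waiting" step above polls each kube-system pod matched by the listed component labels until it reports Ready, with a 4m0s budget. A minimal manual equivalent (a sketch; assumes kubectl is pointed at the same cluster):

	# mirror the selector list and 4m0s timeout from the log line above
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=240s
	done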
	I1124 04:19:52.886701  502762 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-778509
	
	I1124 04:19:52.886728  502762 ubuntu.go:182] provisioning hostname "auto-778509"
	I1124 04:19:52.886817  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:52.911383  502762 main.go:143] libmachine: Using SSH client type: native
	I1124 04:19:52.911707  502762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 04:19:52.911725  502762 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-778509 && echo "auto-778509" | sudo tee /etc/hostname
	I1124 04:19:53.109055  502762 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-778509
	
	I1124 04:19:53.109218  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:53.137152  502762 main.go:143] libmachine: Using SSH client type: native
	I1124 04:19:53.137459  502762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 04:19:53.137475  502762 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-778509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-778509/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-778509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 04:19:53.306528  502762 main.go:143] libmachine: SSH cmd err, output: <nil>: 
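The SSH command above is idempotent: it leaves /etc/hosts alone if an auto-778509 entry already exists, rewrites an existing 127.0.1.1 line in place, and otherwise appends a fresh one. A quick check on the provisioned machine (a sketch, using standard tools):

	hostname                      # should print auto-778509
	getent hosts auto-778509      # should resolve via the 127.0.1.1 entry
	grep '^127.0.1.1' /etc/hosts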
	I1124 04:19:53.306596  502762 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:19:53.306640  502762 ubuntu.go:190] setting up certificates
	I1124 04:19:53.306693  502762 provision.go:84] configureAuth start
	I1124 04:19:53.306780  502762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-778509
	I1124 04:19:53.336318  502762 provision.go:143] copyHostCerts
	I1124 04:19:53.336376  502762 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:19:53.336385  502762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:19:53.336472  502762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:19:53.336574  502762 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:19:53.336580  502762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:19:53.336612  502762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:19:53.336669  502762 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:19:53.336674  502762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:19:53.336698  502762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:19:53.336753  502762 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.auto-778509 san=[127.0.0.1 192.168.76.2 auto-778509 localhost minikube]
	I1124 04:19:53.721754  502762 provision.go:177] copyRemoteCerts
	I1124 04:19:53.721861  502762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:19:53.721934  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:53.754731  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:19:53.859301  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 04:19:53.879396  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:19:53.910331  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 04:19:53.928398  502762 provision.go:87] duration metric: took 621.664588ms to configureAuth
	I1124 04:19:53.928424  502762 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:19:53.928612  502762 config.go:182] Loaded profile config "auto-778509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:19:53.928715  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:53.946747  502762 main.go:143] libmachine: Using SSH client type: native
	I1124 04:19:53.947065  502762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 04:19:53.947097  502762 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:19:54.258804  502762 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
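The printf above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O; the echoed output confirms the file's contents. Verifying that the flag is actually consumed (a sketch; assumes the image's crio.service references the sysconfig file via EnvironmentFile=, which this log does not show):

	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -E 'EnvironmentFile|ExecStart'   # assumption: unit sources the file above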
	I1124 04:19:54.258884  502762 machine.go:97] duration metric: took 4.574784507s to provisionDockerMachine
	I1124 04:19:54.258910  502762 client.go:176] duration metric: took 12.224218341s to LocalClient.Create
	I1124 04:19:54.258962  502762 start.go:167] duration metric: took 12.224347408s to libmachine.API.Create "auto-778509"
	I1124 04:19:54.258977  502762 start.go:293] postStartSetup for "auto-778509" (driver="docker")
	I1124 04:19:54.258987  502762 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:19:54.259050  502762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:19:54.259100  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:54.277972  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:19:54.387362  502762 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:19:54.390633  502762 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:19:54.390717  502762 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:19:54.390747  502762 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:19:54.390802  502762 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:19:54.390901  502762 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:19:54.391010  502762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:19:54.398579  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:19:54.417007  502762 start.go:296] duration metric: took 158.014206ms for postStartSetup
	I1124 04:19:54.417421  502762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-778509
	I1124 04:19:54.436970  502762 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/config.json ...
	I1124 04:19:54.437267  502762 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:19:54.437324  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:54.454426  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:19:54.555433  502762 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:19:54.560398  502762 start.go:128] duration metric: took 12.530827244s to createHost
	I1124 04:19:54.560424  502762 start.go:83] releasing machines lock for "auto-778509", held for 12.530969653s
	I1124 04:19:54.560506  502762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-778509
	I1124 04:19:54.577713  502762 ssh_runner.go:195] Run: cat /version.json
	I1124 04:19:54.577769  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:54.578066  502762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:19:54.578126  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:54.596570  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:19:54.615102  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:19:54.702260  502762 ssh_runner.go:195] Run: systemctl --version
	I1124 04:19:54.792180  502762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:19:54.827165  502762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:19:54.831912  502762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:19:54.832033  502762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:19:54.893192  502762 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
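Renaming the bridge/podman configs to *.mk_disabled removes them from consideration because CNI only loads files ending in .conf, .conflist, or .json; this clears the way for the kindnet config installed later in the run. Inspecting the result (a sketch):

	ls -1 /etc/cni/net.d/    # *.mk_disabled entries are ignored by the container runtime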
	I1124 04:19:54.893211  502762 start.go:496] detecting cgroup driver to use...
	I1124 04:19:54.893245  502762 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:19:54.893291  502762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:19:54.916849  502762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:19:54.935247  502762 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:19:54.935306  502762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:19:54.955604  502762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:19:54.975724  502762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:19:55.122650  502762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:19:55.249674  502762 docker.go:234] disabling docker service ...
	I1124 04:19:55.249791  502762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:19:55.273380  502762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:19:55.287785  502762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:19:55.409530  502762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:19:55.556079  502762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:19:55.571109  502762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:19:55.597444  502762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:19:55.597539  502762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.616243  502762 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:19:55.616352  502762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.633202  502762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.643954  502762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.659551  502762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:19:55.673295  502762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.684007  502762 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.704229  502762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.715135  502762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:19:55.722949  502762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:19:55.732001  502762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:55.869324  502762 ssh_runner.go:195] Run: sudo systemctl restart crio
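Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls before the restart. Checking the resulting drop-in (a sketch; the key names come from the commands above, the surrounding TOML layout depends on the base image):

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]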
	I1124 04:19:56.065366  502762 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:19:56.065444  502762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:19:56.069454  502762 start.go:564] Will wait 60s for crictl version
	I1124 04:19:56.069527  502762 ssh_runner.go:195] Run: which crictl
	I1124 04:19:56.073414  502762 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:19:56.103902  502762 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:19:56.104075  502762 ssh_runner.go:195] Run: crio --version
	I1124 04:19:56.138255  502762 ssh_runner.go:195] Run: crio --version
	I1124 04:19:56.172724  502762 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:19:56.175712  502762 cli_runner.go:164] Run: docker network inspect auto-778509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:19:56.192082  502762 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 04:19:56.196243  502762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:19:56.209101  502762 kubeadm.go:884] updating cluster {Name:auto-778509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-778509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:19:56.209233  502762 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:19:56.209297  502762 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:19:56.261061  502762 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:19:56.261087  502762 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:19:56.261145  502762 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:19:56.298574  502762 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:19:56.298601  502762 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:19:56.298610  502762 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1124 04:19:56.298744  502762 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-778509 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-778509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
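The empty ExecStart= in the drop-in above is deliberate: systemd requires clearing a unit's existing ExecStart before an override can replace it, so the second ExecStart line becomes the only kubelet command. Once the file is copied into place and systemd is reloaded (the next steps in this log), the override can be confirmed with:

	systemctl cat kubelet     # the 10-kubeadm.conf drop-in should show both ExecStart lines
	systemctl is-active kubelet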
	I1124 04:19:56.298848  502762 ssh_runner.go:195] Run: crio config
	I1124 04:19:56.401606  502762 cni.go:84] Creating CNI manager for ""
	I1124 04:19:56.401678  502762 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:19:56.401713  502762 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:19:56.401760  502762 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-778509 NodeName:auto-778509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:19:56.401940  502762 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-778509"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
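The rendered file combines InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one document, matching the kubeadm options computed above. A config like this can be sanity-checked before init (a sketch; kubeadm config validate is available in recent kubeadm releases):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml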
	I1124 04:19:56.402044  502762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:19:56.414553  502762 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:19:56.414687  502762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:19:56.423671  502762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1124 04:19:56.448783  502762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:19:56.469149  502762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1124 04:19:56.497133  502762 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:19:56.501400  502762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:19:56.522665  502762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:56.721447  502762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:19:56.751689  502762 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509 for IP: 192.168.76.2
	I1124 04:19:56.751760  502762 certs.go:195] generating shared ca certs ...
	I1124 04:19:56.751791  502762 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:56.751968  502762 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:19:56.752054  502762 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:19:56.752097  502762 certs.go:257] generating profile certs ...
	I1124 04:19:56.752189  502762 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.key
	I1124 04:19:56.752224  502762 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.crt with IP's: []
	I1124 04:19:57.247429  502762 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.crt ...
	I1124 04:19:57.247462  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.crt: {Name:mk7f604724bf42f096e7e40c20f10467d20ef986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:57.247699  502762 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.key ...
	I1124 04:19:57.247716  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.key: {Name:mkc19ae019700138310310707f7a53514ede31fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:57.247860  502762 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.key.5341c204
	I1124 04:19:57.247883  502762 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.crt.5341c204 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 04:19:57.616421  502762 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.crt.5341c204 ...
	I1124 04:19:57.616454  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.crt.5341c204: {Name:mk527bb9f2a3625f56a62720a6e0c86127eeb952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:57.616669  502762 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.key.5341c204 ...
	I1124 04:19:57.616691  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.key.5341c204: {Name:mkdc480e26e16274f15be0799babe88db18343fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:57.616831  502762 certs.go:382] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.crt.5341c204 -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.crt
	I1124 04:19:57.616950  502762 certs.go:386] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.key.5341c204 -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.key
	I1124 04:19:57.617037  502762 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.key
	I1124 04:19:57.617071  502762 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.crt with IP's: []
	I1124 04:19:59.021044  502762 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.crt ...
	I1124 04:19:59.021071  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.crt: {Name:mk1a6098decd76def5555c23226cc66fa41fc11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:59.021225  502762 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.key ...
	I1124 04:19:59.021233  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.key: {Name:mk551d9401b2ee0595b5e7123fe4053b13b4b7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:59.021400  502762 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:19:59.021439  502762 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:19:59.021447  502762 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:19:59.021474  502762 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:19:59.021501  502762 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:19:59.021526  502762 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:19:59.021571  502762 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:19:59.022132  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:19:59.045619  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:19:59.071174  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:19:59.100826  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:19:59.121669  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1124 04:19:59.165965  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 04:19:59.205118  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:19:59.252447  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 04:19:59.273789  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:19:59.293675  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:19:59.312858  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:19:59.332179  502762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:19:59.346409  502762 ssh_runner.go:195] Run: openssl version
	I1124 04:19:59.353209  502762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:19:59.362292  502762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:59.366356  502762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:59.366416  502762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:59.409012  502762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:19:59.418088  502762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:19:59.427069  502762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:19:59.431417  502762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:19:59.431531  502762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:19:59.473344  502762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
	I1124 04:19:59.482504  502762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:19:59.491763  502762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:19:59.496190  502762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:19:59.496309  502762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:19:59.538182  502762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
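Each openssl x509 -hash call above prints the certificate's subject-name hash, and the paired ln -fs creates the <hash>.0 symlink that OpenSSL's directory lookup expects; that is how minikubeCA.pem becomes reachable as b5213941.0. Reproducing the scheme by hand (a sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"                  # b5213941, matching the symlink created above
	ls -l "/etc/ssl/certs/$h.0"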
	I1124 04:19:59.547469  502762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:19:59.552073  502762 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 04:19:59.552173  502762 kubeadm.go:401] StartCluster: {Name:auto-778509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-778509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:19:59.552300  502762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:19:59.552391  502762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:19:59.592114  502762 cri.go:89] found id: ""
	I1124 04:19:59.592294  502762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:19:59.603889  502762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 04:19:59.616566  502762 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 04:19:59.616722  502762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 04:19:59.628667  502762 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 04:19:59.628746  502762 kubeadm.go:158] found existing configuration files:
	
	I1124 04:19:59.628831  502762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 04:19:59.640858  502762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 04:19:59.640967  502762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 04:19:59.649988  502762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 04:19:59.659579  502762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 04:19:59.659723  502762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 04:19:59.668143  502762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 04:19:59.677347  502762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 04:19:59.677477  502762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 04:19:59.687580  502762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 04:19:59.699341  502762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 04:19:59.699470  502762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 04:19:59.710770  502762 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 04:19:59.771544  502762 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 04:19:59.772197  502762 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 04:19:59.800957  502762 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 04:19:59.801070  502762 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 04:19:59.801135  502762 kubeadm.go:319] OS: Linux
	I1124 04:19:59.801214  502762 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 04:19:59.801283  502762 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 04:19:59.801354  502762 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 04:19:59.801423  502762 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 04:19:59.801503  502762 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 04:19:59.801573  502762 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 04:19:59.801652  502762 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 04:19:59.801718  502762 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 04:19:59.801782  502762 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 04:19:59.926785  502762 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 04:19:59.926938  502762 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 04:19:59.927057  502762 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 04:19:59.954871  502762 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1124 04:19:55.759927  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:19:57.761659  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:19:59.764066  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:19:59.962585  502762 out.go:252]   - Generating certificates and keys ...
	I1124 04:19:59.962724  502762 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 04:19:59.962816  502762 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 04:20:00.302564  502762 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 04:20:00.928473  502762 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 04:20:01.089545  502762 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 04:20:01.594378  502762 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1124 04:20:01.780480  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:04.262358  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:20:03.650805  502762 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 04:20:03.650950  502762 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-778509 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 04:20:04.844634  502762 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 04:20:04.844777  502762 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-778509 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 04:20:05.556431  502762 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 04:20:06.435154  502762 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 04:20:06.820544  502762 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 04:20:06.821124  502762 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 04:20:06.983752  502762 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 04:20:07.671314  502762 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 04:20:08.794139  502762 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 04:20:09.578872  502762 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 04:20:09.975638  502762 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 04:20:09.976332  502762 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 04:20:09.978834  502762 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1124 04:20:06.761462  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:09.261839  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:20:09.982152  502762 out.go:252]   - Booting up control plane ...
	I1124 04:20:09.982266  502762 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 04:20:09.982345  502762 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 04:20:09.984280  502762 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 04:20:10.013293  502762 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 04:20:10.013407  502762 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 04:20:10.017662  502762 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 04:20:10.020282  502762 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 04:20:10.020344  502762 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 04:20:10.166949  502762 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 04:20:10.167070  502762 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 04:20:11.169806  502762 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001813204s
	I1124 04:20:11.171972  502762 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 04:20:11.172327  502762 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 04:20:11.172645  502762 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 04:20:11.173469  502762 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1124 04:20:11.760639  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:14.260151  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:20:13.694654  502762 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.520372444s
	I1124 04:20:14.953882  502762 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.779333017s
	I1124 04:20:16.675325  502762 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502269303s
	I1124 04:20:16.696242  502762 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 04:20:16.712222  502762 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 04:20:16.730801  502762 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 04:20:16.731019  502762 kubeadm.go:319] [mark-control-plane] Marking the node auto-778509 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 04:20:16.744790  502762 kubeadm.go:319] [bootstrap-token] Using token: 81yeiu.qp3nz3md4mckox5j
	I1124 04:20:16.747861  502762 out.go:252]   - Configuring RBAC rules ...
	I1124 04:20:16.748011  502762 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 04:20:16.753293  502762 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 04:20:16.767878  502762 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 04:20:16.775459  502762 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 04:20:16.780101  502762 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 04:20:16.784711  502762 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 04:20:17.087314  502762 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 04:20:17.555386  502762 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 04:20:18.089788  502762 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 04:20:18.090211  502762 kubeadm.go:319] 
	I1124 04:20:18.090294  502762 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 04:20:18.090305  502762 kubeadm.go:319] 
	I1124 04:20:18.090390  502762 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 04:20:18.090400  502762 kubeadm.go:319] 
	I1124 04:20:18.090425  502762 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 04:20:18.090538  502762 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 04:20:18.090597  502762 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 04:20:18.090606  502762 kubeadm.go:319] 
	I1124 04:20:18.090660  502762 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 04:20:18.090669  502762 kubeadm.go:319] 
	I1124 04:20:18.090717  502762 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 04:20:18.090723  502762 kubeadm.go:319] 
	I1124 04:20:18.090775  502762 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 04:20:18.090855  502762 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 04:20:18.090926  502762 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 04:20:18.090935  502762 kubeadm.go:319] 
	I1124 04:20:18.091036  502762 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 04:20:18.091126  502762 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 04:20:18.091138  502762 kubeadm.go:319] 
	I1124 04:20:18.091223  502762 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 81yeiu.qp3nz3md4mckox5j \
	I1124 04:20:18.091333  502762 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 \
	I1124 04:20:18.091358  502762 kubeadm.go:319] 	--control-plane 
	I1124 04:20:18.091367  502762 kubeadm.go:319] 
	I1124 04:20:18.091451  502762 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 04:20:18.091460  502762 kubeadm.go:319] 
	I1124 04:20:18.091543  502762 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 81yeiu.qp3nz3md4mckox5j \
	I1124 04:20:18.091649  502762 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 
	I1124 04:20:18.095449  502762 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 04:20:18.095676  502762 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 04:20:18.095792  502762 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
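
	Note: the --discovery-token-ca-cert-hash value repeated in the join commands above is, per the kubeadm documentation, the SHA-256 fingerprint of the cluster CA certificate's Subject Public Key Info (RFC 7469-style pinning). A minimal Go sketch to recompute it from the default kubeadm CA path, useful for cross-checking a join command against this log (path and error handling are illustrative, not minikube code):

	    package main

	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/hex"
	        "encoding/pem"
	        "fmt"
	        "os"
	    )

	    func main() {
	        // Default location kubeadm writes the cluster CA certificate to.
	        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	        if err != nil {
	            panic(err)
	        }
	        block, _ := pem.Decode(pemBytes)
	        if block == nil {
	            panic("ca.crt contains no PEM block")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
	        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	    }
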
	I1124 04:20:18.095815  502762 cni.go:84] Creating CNI manager for ""
	I1124 04:20:18.095823  502762 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:20:18.099085  502762 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1124 04:20:16.260331  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:18.261693  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:20:18.102070  502762 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 04:20:18.106654  502762 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 04:20:18.106683  502762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 04:20:18.122291  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 04:20:18.515789  502762 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 04:20:18.515930  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:18.516007  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-778509 minikube.k8s.io/updated_at=2025_11_24T04_20_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=auto-778509 minikube.k8s.io/primary=true
	I1124 04:20:18.749573  502762 ops.go:34] apiserver oom_adj: -16
	I1124 04:20:18.749683  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:19.249730  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:19.750591  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:20.249772  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:20.750031  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:21.249787  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:21.749778  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:22.249943  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:22.750659  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:22.870719  502762 kubeadm.go:1114] duration metric: took 4.354833647s to wait for elevateKubeSystemPrivileges
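
	Note: the burst of "kubectl get sa default" runs above is a readiness gate, not busywork: the "default" ServiceAccount only exists once the controller-manager's serviceaccount controller has synced in the new cluster, so the retries run on a 500ms cadence until the get succeeds (4.35s here). A minimal client-go sketch of the same wait, assuming the kubeconfig path from the log (a sketch, not minikube's actual implementation):

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        client, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        deadline := time.Now().Add(2 * time.Minute)
	        for time.Now().Before(deadline) {
	            // Succeeds only after the serviceaccount controller has created "default".
	            _, err := client.CoreV1().ServiceAccounts("default").Get(
	                context.TODO(), "default", metav1.GetOptions{})
	            if err == nil {
	                fmt.Println("default ServiceAccount is ready")
	                return
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        panic("timed out waiting for the default ServiceAccount")
	    }
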
	I1124 04:20:22.870748  502762 kubeadm.go:403] duration metric: took 23.318581272s to StartCluster
	I1124 04:20:22.870765  502762 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:20:22.870828  502762 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:20:22.871900  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:20:22.872124  502762 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:20:22.872289  502762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 04:20:22.872565  502762 config.go:182] Loaded profile config "auto-778509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:20:22.872599  502762 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:20:22.872658  502762 addons.go:70] Setting storage-provisioner=true in profile "auto-778509"
	I1124 04:20:22.872673  502762 addons.go:239] Setting addon storage-provisioner=true in "auto-778509"
	I1124 04:20:22.872694  502762 host.go:66] Checking if "auto-778509" exists ...
	I1124 04:20:22.873226  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Status}}
	I1124 04:20:22.873823  502762 addons.go:70] Setting default-storageclass=true in profile "auto-778509"
	I1124 04:20:22.873847  502762 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-778509"
	I1124 04:20:22.874137  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Status}}
	I1124 04:20:22.875908  502762 out.go:179] * Verifying Kubernetes components...
	I1124 04:20:22.879168  502762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:20:22.931664  502762 addons.go:239] Setting addon default-storageclass=true in "auto-778509"
	I1124 04:20:22.931703  502762 host.go:66] Checking if "auto-778509" exists ...
	I1124 04:20:22.932167  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Status}}
	I1124 04:20:22.936599  502762 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:20:22.939497  502762 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:20:22.939522  502762 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:20:22.939591  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:20:22.971298  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:20:22.978679  502762 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:20:22.978700  502762 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:20:22.978758  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:20:23.004493  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:20:23.347224  502762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:20:23.373992  502762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 04:20:23.374154  502762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:20:23.485742  502762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:20:24.251324  502762 node_ready.go:35] waiting up to 15m0s for node "auto-778509" to be "Ready" ...
	I1124 04:20:24.250368  502762 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 04:20:24.303673  502762 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
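
	Note: the host-record injection logged at 04:20:23.373992 and 04:20:24.250368 rewrites the coredns ConfigMap through the sed pipeline shown above. Reconstructed from those sed expressions (not captured from the cluster), the resulting Corefile gains a hosts stanza ahead of the forward plugin, roughly:

	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }

	This is how pods resolve host.minikube.internal to the host-side gateway address without editing /etc/hosts in every container.
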
	W1124 04:20:20.760224  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:22.760355  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:24.761632  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:20:24.307561  502762 addons.go:530] duration metric: took 1.434948386s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 04:20:24.757775  502762 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-778509" context rescaled to 1 replicas
	W1124 04:20:26.254915  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:27.260440  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:29.759077  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:20:30.260647  500996 pod_ready.go:94] pod "coredns-66bc5c9577-jtn7v" is "Ready"
	I1124 04:20:30.260679  500996 pod_ready.go:86] duration metric: took 36.506710943s for pod "coredns-66bc5c9577-jtn7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.263638  500996 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.269443  500996 pod_ready.go:94] pod "etcd-default-k8s-diff-port-303179" is "Ready"
	I1124 04:20:30.269477  500996 pod_ready.go:86] duration metric: took 5.80752ms for pod "etcd-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.272217  500996 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.277309  500996 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-303179" is "Ready"
	I1124 04:20:30.277336  500996 pod_ready.go:86] duration metric: took 5.090834ms for pod "kube-apiserver-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.284180  500996 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.457131  500996 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-303179" is "Ready"
	I1124 04:20:30.457163  500996 pod_ready.go:86] duration metric: took 172.953995ms for pod "kube-controller-manager-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.657378  500996 pod_ready.go:83] waiting for pod "kube-proxy-dxbvb" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 04:20:28.255071  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:30.256521  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	I1124 04:20:31.057977  500996 pod_ready.go:94] pod "kube-proxy-dxbvb" is "Ready"
	I1124 04:20:31.058008  500996 pod_ready.go:86] duration metric: took 400.601728ms for pod "kube-proxy-dxbvb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:31.257938  500996 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:31.657599  500996 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-303179" is "Ready"
	I1124 04:20:31.657628  500996 pod_ready.go:86] duration metric: took 399.660628ms for pod "kube-scheduler-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:31.657640  500996 pod_ready.go:40] duration metric: took 37.909245818s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:20:31.716927  500996 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 04:20:31.719974  500996 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-303179" cluster and "default" namespace by default
	W1124 04:20:32.754699  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:35.254390  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:37.254838  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:39.754997  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:42.256249  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:44.262853  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.445081105Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=43580074-e39a-438a-b48e-4292a44bcbf9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.447002753Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=07a56214-1d3b-46ae-9cab-a547f3d97c7a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.447264549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.463299604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.465964694Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b839e0c3255dcb75c8c02ad5d11329c0e3fdf48ac24d2435c86966b58ea48f89/merged/etc/passwd: no such file or directory"
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.466231069Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b839e0c3255dcb75c8c02ad5d11329c0e3fdf48ac24d2435c86966b58ea48f89/merged/etc/group: no such file or directory"
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.469226099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.498809615Z" level=info msg="Created container 1382fa66d6c4e982dc6b4a55d8edbbf187a34c265672f91b49f816a794745593: kube-system/storage-provisioner/storage-provisioner" id=07a56214-1d3b-46ae-9cab-a547f3d97c7a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.503666625Z" level=info msg="Starting container: 1382fa66d6c4e982dc6b4a55d8edbbf187a34c265672f91b49f816a794745593" id=7db78333-922c-46f0-9dfc-8aea29ddd4d1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.510065503Z" level=info msg="Started container" PID=1684 containerID=1382fa66d6c4e982dc6b4a55d8edbbf187a34c265672f91b49f816a794745593 description=kube-system/storage-provisioner/storage-provisioner id=7db78333-922c-46f0-9dfc-8aea29ddd4d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2aec815fd912f6fb0f5f5c102ce3b4d6e6e7ad80053e12d9286f5454291f238d
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.845309143Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.850734307Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.850769475Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.850797618Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.85533167Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.855367872Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.855391347Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.859768335Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.859806046Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.859832705Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.864256897Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.864290259Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.86431828Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.868470042Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.868506777Z" level=info msg="Updated default CNI network name to kindnet"
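
	Note: the CREATE .temp, WRITE, RENAME sequence CRI-O reports above is kindnet updating its CNI config atomically: the new conflist is written to 10-kindnet.conflist.temp and then renamed over 10-kindnet.conflist, so the CNI watcher never reads a half-written file. The pattern in miniature (paths illustrative):

	    package main

	    import (
	        "log"
	        "os"
	    )

	    func main() {
	        conf := []byte(`{"cniVersion": "0.3.1", "name": "kindnet", "plugins": []}`)
	        tmp := "/etc/cni/net.d/10-kindnet.conflist.temp"
	        if err := os.WriteFile(tmp, conf, 0o644); err != nil {
	            log.Fatal(err)
	        }
	        // rename(2) is atomic within a filesystem: watchers such as CRI-O's
	        // CNI monitor observe either the old config or the complete new one.
	        if err := os.Rename(tmp, "/etc/cni/net.d/10-kindnet.conflist"); err != nil {
	            log.Fatal(err)
	        }
	    }
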
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	1382fa66d6c4e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   2aec815fd912f       storage-provisioner                                    kube-system
	71c2ac2a77db9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago       Exited              dashboard-metrics-scraper   2                   aaedd6482295c       dashboard-metrics-scraper-6ffb444bf9-pjsgd             kubernetes-dashboard
	e6e6068e06b19       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago       Running             kubernetes-dashboard        0                   d8e8760eab9df       kubernetes-dashboard-855c9754f9-kxt5z                  kubernetes-dashboard
	94e4284acaaa6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   17f79d28b9d8c       coredns-66bc5c9577-jtn7v                               kube-system
	02dae000866f7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   bffa5e056a655       busybox                                                default
	8ea36127f1e2a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   0d11be28b3cd4       kube-proxy-dxbvb                                       kube-system
	77638956b16e6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   7cf33f37c1ea0       kindnet-wpp6p                                          kube-system
	691c670515b36       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   2aec815fd912f       storage-provisioner                                    kube-system
	e9dbfcfcc198e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   61108daff7cbf       kube-controller-manager-default-k8s-diff-port-303179   kube-system
	99ae33342ea98       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   500b20d5f5672       etcd-default-k8s-diff-port-303179                      kube-system
	5d9db75c10b00       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   ea00382c9be21       kube-scheduler-default-k8s-diff-port-303179            kube-system
	7b5099e4fd3c1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c1f8f86e883fb       kube-apiserver-default-k8s-diff-port-303179            kube-system
	
	
	==> coredns [94e4284acaaa64a390d75968180bd4df33aa7cd9f0ad954d942f3d86db1a8dc9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59472 - 33126 "HINFO IN 1902876730966256139.252883644883131976. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.039823838s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
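
	Note: the "dial tcp 10.96.0.1:443: i/o timeout" errors above are CoreDNS starting before the service dataplane is programmed: 10.96.0.1 is the kubernetes Service VIP, reachable only once kube-proxy/kindnet have installed their rules (the kindnet log below shows the same timeouts clearing at 04:20:24). A trivial Go reachability check of the same kind (VIP taken from the log):

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	        if err != nil {
	            // Matches the i/o timeout CoreDNS reports before the rules exist.
	            fmt.Println("service VIP unreachable:", err)
	            return
	        }
	        defer conn.Close()
	        fmt.Println("service VIP reachable")
	    }
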
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-303179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-303179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=default-k8s-diff-port-303179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_18_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:18:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-303179
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:20:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:20:22 +0000   Mon, 24 Nov 2025 04:18:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:20:22 +0000   Mon, 24 Nov 2025 04:18:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:20:22 +0000   Mon, 24 Nov 2025 04:18:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:20:22 +0000   Mon, 24 Nov 2025 04:19:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-303179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                0604e81b-b009-43d1-b54f-04b6a69cede9
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-jtn7v                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-303179                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-wpp6p                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-303179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-303179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-dxbvb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-default-k8s-diff-port-303179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pjsgd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kxt5z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m19s              kube-proxy       
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m26s              kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m26s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s              kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m26s              kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m26s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m22s              node-controller  Node default-k8s-diff-port-303179 event: Registered Node default-k8s-diff-port-303179 in Controller
	  Normal   NodeReady                99s                kubelet          Node default-k8s-diff-port-303179 status is now: NodeReady
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                node-controller  Node default-k8s-diff-port-303179 event: Registered Node default-k8s-diff-port-303179 in Controller
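
	Note: the Allocated resources percentages above follow directly from the Capacity block: the listed CPU requests (100m + 100m + 100m + 250m + 200m + 100m) sum to 850m, and 850m of the node's 2-CPU (2000m) capacity is 850/2000 = 42.5%, rendered as 42%; the single 100m CPU limit (kindnet's) is likewise 100/2000 = 5%.
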
	
	
	==> dmesg <==
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	[Nov24 04:15] overlayfs: idmapped layers are currently not supported
	[ +47.476343] overlayfs: idmapped layers are currently not supported
	[Nov24 04:16] overlayfs: idmapped layers are currently not supported
	[Nov24 04:17] overlayfs: idmapped layers are currently not supported
	[Nov24 04:18] overlayfs: idmapped layers are currently not supported
	[ +43.060353] overlayfs: idmapped layers are currently not supported
	[Nov24 04:19] overlayfs: idmapped layers are currently not supported
	[ +19.472739] overlayfs: idmapped layers are currently not supported
	[Nov24 04:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [99ae33342ea980542a5d01e94e4e877f3a9a7f61e7804bf44fc417104b2c8f75] <==
	{"level":"warn","ts":"2025-11-24T04:19:50.002267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.017629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.040916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.067384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.077608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.088852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.106560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.122294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.137717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.161012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.183974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.197523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.219208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.236051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.255613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.272740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.285437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.301403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.316090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.331510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.350790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.373269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.390994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.404788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.474529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37240","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 04:20:47 up  3:02,  0 user,  load average: 4.56, 3.79, 3.09
	Linux default-k8s-diff-port-303179 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [77638956b16e6865b98ae01afe0403153860b4c41ae3b6f7f1ca46f8dbd2a939] <==
	I1124 04:19:52.640848       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:19:52.718874       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 04:19:52.719061       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:19:52.719101       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:19:52.719140       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:19:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:19:52.845628       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:19:52.845729       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:19:52.845764       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:19:52.846217       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 04:20:22.844346       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 04:20:22.845669       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 04:20:22.846976       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 04:20:22.915353       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 04:20:24.146745       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:20:24.146781       1 metrics.go:72] Registering metrics
	I1124 04:20:24.146848       1 controller.go:711] "Syncing nftables rules"
	I1124 04:20:32.844992       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:20:32.845040       1 main.go:301] handling current node
	I1124 04:20:42.850811       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:20:42.850845       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7b5099e4fd3c18fc391ec751d92268e0f783642d7729eae47a7899934d2bf05a] <==
	I1124 04:19:51.694822       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 04:19:51.694941       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 04:19:51.704751       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 04:19:51.706787       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 04:19:51.706971       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 04:19:51.707005       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 04:19:51.707071       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 04:19:51.707718       1 aggregator.go:171] initial CRD sync complete...
	I1124 04:19:51.712603       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 04:19:51.712679       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 04:19:51.712710       1 cache.go:39] Caches are synced for autoregister controller
	I1124 04:19:51.713698       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 04:19:51.754181       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1124 04:19:51.784232       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 04:19:52.096824       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:19:52.219085       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:19:52.550233       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 04:19:52.718083       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 04:19:52.844846       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:19:52.866250       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:19:53.008145       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.114.160"}
	I1124 04:19:53.029558       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.184.179"}
	I1124 04:19:55.389749       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 04:19:55.439424       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 04:19:55.493967       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e9dbfcfcc198e21983dc97bf121184dd3db9248de5fd970ee04f8ed5f32a25ed] <==
	I1124 04:19:54.970890       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:19:54.981178       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 04:19:54.982435       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 04:19:54.982578       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 04:19:54.982604       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 04:19:54.982893       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 04:19:54.983054       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 04:19:54.983078       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 04:19:54.983182       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:19:54.983219       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:19:54.983248       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:19:54.983453       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 04:19:54.989373       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 04:19:54.996806       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 04:19:54.997019       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:19:55.012960       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 04:19:55.032685       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 04:19:55.032913       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 04:19:55.033029       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-303179"
	I1124 04:19:55.033102       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 04:19:55.032805       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 04:19:55.034562       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 04:19:55.037182       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 04:19:55.038588       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 04:19:55.045766       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [8ea36127f1e2a736a6fa13fdcd1a92bbbae15e2705dd51f2675d3440113d8abb] <==
	I1124 04:19:52.807285       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:19:52.948943       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:19:53.050073       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:19:53.064936       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 04:19:53.065035       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:19:53.203615       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:19:53.203749       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:19:53.219337       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:19:53.219732       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:19:53.219927       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:19:53.221187       1 config.go:200] "Starting service config controller"
	I1124 04:19:53.221247       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:19:53.221286       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:19:53.221312       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:19:53.221348       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:19:53.221375       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:19:53.222681       1 config.go:309] "Starting node config controller"
	I1124 04:19:53.223335       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:19:53.223393       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:19:53.324149       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:19:53.324186       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:19:53.324227       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
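
	Note: the route_localnet line above refers to a real sysctl: kube-proxy flips net.ipv4.conf.all.route_localnet to 1 so NodePort traffic addressed to 127.0.0.1 can be routed instead of dropped as martian. The equivalent change by hand, sketched in Go (requires root; the procfs path is the standard one):

	    package main

	    import (
	        "log"
	        "os"
	    )

	    func main() {
	        // Same effect as: sysctl -w net.ipv4.conf.all.route_localnet=1
	        const p = "/proc/sys/net/ipv4/conf/all/route_localnet"
	        if err := os.WriteFile(p, []byte("1"), 0o644); err != nil {
	            log.Fatal(err)
	        }
	    }
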
	
	
	==> kube-scheduler [5d9db75c10b0014f3fe772d0746170c1ac112901a7f81fceee9ad108d08be4d4] <==
	I1124 04:19:48.785375       1 serving.go:386] Generated self-signed cert in-memory
	W1124 04:19:51.263490       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 04:19:51.263618       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 04:19:51.263654       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 04:19:51.263702       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 04:19:51.557059       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 04:19:51.570827       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:19:51.589123       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 04:19:51.589372       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:19:51.589397       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:19:51.589561       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 04:19:51.789807       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:19:55 default-k8s-diff-port-303179 kubelet[795]: I1124 04:19:55.804609     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwlh4\" (UniqueName: \"kubernetes.io/projected/f27d2dc7-02aa-4c7f-ad0d-2780a4cbead8-kube-api-access-bwlh4\") pod \"kubernetes-dashboard-855c9754f9-kxt5z\" (UID: \"f27d2dc7-02aa-4c7f-ad0d-2780a4cbead8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kxt5z"
	Nov 24 04:19:55 default-k8s-diff-port-303179 kubelet[795]: W1124 04:19:55.968721     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/crio-aaedd6482295c07e2f424254ad1aba39e814cf703ff571493c23bbe504f0cccc WatchSource:0}: Error finding container aaedd6482295c07e2f424254ad1aba39e814cf703ff571493c23bbe504f0cccc: Status 404 returned error can't find the container with id aaedd6482295c07e2f424254ad1aba39e814cf703ff571493c23bbe504f0cccc
	Nov 24 04:19:56 default-k8s-diff-port-303179 kubelet[795]: W1124 04:19:56.002300     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/crio-d8e8760eab9dff2ed38768db4d1595e78b3b65123e0aeae4cd4ed0354afa3376 WatchSource:0}: Error finding container d8e8760eab9dff2ed38768db4d1595e78b3b65123e0aeae4cd4ed0354afa3376: Status 404 returned error can't find the container with id d8e8760eab9dff2ed38768db4d1595e78b3b65123e0aeae4cd4ed0354afa3376
	Nov 24 04:20:00 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:00.004372     795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 04:20:02 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:02.369792     795 scope.go:117] "RemoveContainer" containerID="5c338d457c4d322764fcae234383de9faa199487d56e0765c922c16fcbbc7240"
	Nov 24 04:20:03 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:03.375754     795 scope.go:117] "RemoveContainer" containerID="5c338d457c4d322764fcae234383de9faa199487d56e0765c922c16fcbbc7240"
	Nov 24 04:20:03 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:03.376050     795 scope.go:117] "RemoveContainer" containerID="9010f62448852361cf7013ab7f56db36f153af69befb8b8430e0af6aea19cdee"
	Nov 24 04:20:03 default-k8s-diff-port-303179 kubelet[795]: E1124 04:20:03.376201     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pjsgd_kubernetes-dashboard(4e184ed9-95b6-40f3-a516-b9ab36a8e5f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pjsgd" podUID="4e184ed9-95b6-40f3-a516-b9ab36a8e5f5"
	Nov 24 04:20:04 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:04.381088     795 scope.go:117] "RemoveContainer" containerID="9010f62448852361cf7013ab7f56db36f153af69befb8b8430e0af6aea19cdee"
	Nov 24 04:20:04 default-k8s-diff-port-303179 kubelet[795]: E1124 04:20:04.381862     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pjsgd_kubernetes-dashboard(4e184ed9-95b6-40f3-a516-b9ab36a8e5f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pjsgd" podUID="4e184ed9-95b6-40f3-a516-b9ab36a8e5f5"
	Nov 24 04:20:05 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:05.925503     795 scope.go:117] "RemoveContainer" containerID="9010f62448852361cf7013ab7f56db36f153af69befb8b8430e0af6aea19cdee"
	Nov 24 04:20:05 default-k8s-diff-port-303179 kubelet[795]: E1124 04:20:05.925705     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pjsgd_kubernetes-dashboard(4e184ed9-95b6-40f3-a516-b9ab36a8e5f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pjsgd" podUID="4e184ed9-95b6-40f3-a516-b9ab36a8e5f5"
	Nov 24 04:20:18 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:18.154879     795 scope.go:117] "RemoveContainer" containerID="9010f62448852361cf7013ab7f56db36f153af69befb8b8430e0af6aea19cdee"
	Nov 24 04:20:18 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:18.426077     795 scope.go:117] "RemoveContainer" containerID="9010f62448852361cf7013ab7f56db36f153af69befb8b8430e0af6aea19cdee"
	Nov 24 04:20:18 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:18.426636     795 scope.go:117] "RemoveContainer" containerID="71c2ac2a77db96c635e8a7e09623f36fea99c4831f268be3cc0d4dd5cdcaa5d4"
	Nov 24 04:20:18 default-k8s-diff-port-303179 kubelet[795]: E1124 04:20:18.427218     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pjsgd_kubernetes-dashboard(4e184ed9-95b6-40f3-a516-b9ab36a8e5f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pjsgd" podUID="4e184ed9-95b6-40f3-a516-b9ab36a8e5f5"
	Nov 24 04:20:18 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:18.472046     795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kxt5z" podStartSLOduration=10.658353632 podStartE2EDuration="23.472024344s" podCreationTimestamp="2025-11-24 04:19:55 +0000 UTC" firstStartedPulling="2025-11-24 04:19:56.010347835 +0000 UTC m=+12.214416731" lastFinishedPulling="2025-11-24 04:20:08.824018547 +0000 UTC m=+25.028087443" observedRunningTime="2025-11-24 04:20:09.432057745 +0000 UTC m=+25.636126641" watchObservedRunningTime="2025-11-24 04:20:18.472024344 +0000 UTC m=+34.676093248"
	Nov 24 04:20:23 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:23.441449     795 scope.go:117] "RemoveContainer" containerID="691c670515b3649d1af8f8fc34e9afc633ea3c1168b0515b53808ecd55c01c47"
	Nov 24 04:20:25 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:25.926007     795 scope.go:117] "RemoveContainer" containerID="71c2ac2a77db96c635e8a7e09623f36fea99c4831f268be3cc0d4dd5cdcaa5d4"
	Nov 24 04:20:25 default-k8s-diff-port-303179 kubelet[795]: E1124 04:20:25.926234     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pjsgd_kubernetes-dashboard(4e184ed9-95b6-40f3-a516-b9ab36a8e5f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pjsgd" podUID="4e184ed9-95b6-40f3-a516-b9ab36a8e5f5"
	Nov 24 04:20:38 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:38.155192     795 scope.go:117] "RemoveContainer" containerID="71c2ac2a77db96c635e8a7e09623f36fea99c4831f268be3cc0d4dd5cdcaa5d4"
	Nov 24 04:20:38 default-k8s-diff-port-303179 kubelet[795]: E1124 04:20:38.155387     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pjsgd_kubernetes-dashboard(4e184ed9-95b6-40f3-a516-b9ab36a8e5f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pjsgd" podUID="4e184ed9-95b6-40f3-a516-b9ab36a8e5f5"
	Nov 24 04:20:45 default-k8s-diff-port-303179 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 04:20:45 default-k8s-diff-port-303179 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 04:20:45 default-k8s-diff-port-303179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
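	
	The kubelet entries above show dashboard-metrics-scraper cycling through CrashLoopBackOff with the usual doubling back-off (10s, then 20s). A hedged way to pull the crashed container's output, reusing the pod name and kubectl context exactly as they appear elsewhere in this report:
	
	  kubectl --context default-k8s-diff-port-303179 -n kubernetes-dashboard \
	    logs dashboard-metrics-scraper-6ffb444bf9-pjsgd --previous
	  kubectl --context default-k8s-diff-port-303179 -n kubernetes-dashboard \
	    describe pod dashboard-metrics-scraper-6ffb444bf9-pjsgd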
	
	
	==> kubernetes-dashboard [e6e6068e06b191282d1459f1f15294aab079f0847575d0a311c401d7cff667c8] <==
	2025/11/24 04:20:08 Starting overwatch
	2025/11/24 04:20:08 Using namespace: kubernetes-dashboard
	2025/11/24 04:20:08 Using in-cluster config to connect to apiserver
	2025/11/24 04:20:08 Using secret token for csrf signing
	2025/11/24 04:20:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 04:20:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 04:20:08 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 04:20:08 Generating JWE encryption key
	2025/11/24 04:20:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 04:20:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 04:20:09 Initializing JWE encryption key from synchronized object
	2025/11/24 04:20:09 Creating in-cluster Sidecar client
	2025/11/24 04:20:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 04:20:09 Serving insecurely on HTTP port: 9090
	2025/11/24 04:20:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
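	
	Per its own log the dashboard serves plain HTTP on port 9090. A hedged local check via port-forward (the deployment name kubernetes-dashboard is assumed from the pod prefix in the kubelet block above, not stated in this log):
	
	  kubectl --context default-k8s-diff-port-303179 -n kubernetes-dashboard \
	    port-forward deploy/kubernetes-dashboard 9090:9090 &
	  curl -s http://127.0.0.1:9090/ | head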
	
	
	==> storage-provisioner [1382fa66d6c4e982dc6b4a55d8edbbf187a34c265672f91b49f816a794745593] <==
	I1124 04:20:23.540437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 04:20:23.557226       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 04:20:23.560199       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 04:20:23.564892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:27.020777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:31.280819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:34.879888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:37.933301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:40.956281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:40.964588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:20:40.964783       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 04:20:40.964990       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-303179_89b97a19-a91c-429c-9576-765a8ccc9830!
	I1124 04:20:40.965790       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"09f66fd8-db14-4a17-8771-4d111bed13aa", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-303179_89b97a19-a91c-429c-9576-765a8ccc9830 became leader
	W1124 04:20:40.969590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:40.975186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:20:41.065932       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-303179_89b97a19-a91c-429c-9576-765a8ccc9830!
	W1124 04:20:42.978608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:42.989015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:44.992796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:44.998959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:47.002662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:47.009667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [691c670515b3649d1af8f8fc34e9afc633ea3c1168b0515b53808ecd55c01c47] <==
	I1124 04:19:52.653994       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 04:20:22.657101       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
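The first storage-provisioner instance above died on an i/o timeout to 10.96.0.1:443 (the in-cluster apiserver VIP) while the apiserver was still coming back after the restart; the replacement instance at 04:20:23 succeeded. A hedged connectivity check from inside the node, using the address from the fatal line:
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-303179 -- \
	  curl -sk --max-time 5 https://10.96.0.1:443/version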
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179: exit status 2 (425.534205ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
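The helpers read one status field at a time with a Go template; several fields can be pulled in a single call. A hedged sketch reusing the .Host and .APIServer field names the helpers themselves query, plus .Kubelet, which is assumed here:
	out/minikube-linux-arm64 status -p default-k8s-diff-port-303179 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'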
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-303179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-303179
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-303179:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748",
	        "Created": "2025-11-24T04:17:56.199463475Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501181,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T04:19:36.134917506Z",
	            "FinishedAt": "2025-11-24T04:19:35.094042629Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/hostname",
	        "HostsPath": "/var/lib/docker/containers/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/hosts",
	        "LogPath": "/var/lib/docker/containers/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748-json.log",
	        "Name": "/default-k8s-diff-port-303179",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-303179:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-303179",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748",
	                "LowerDir": "/var/lib/docker/overlay2/f795050361c122f8186f9d116815a241873f66c7dfed963bb16fb3ec6718f306-init/diff:/var/lib/docker/overlay2/d0aa28be488ed1454a92ae6ce1d851cbc3e880fd8a322d63788b1835a346ec13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f795050361c122f8186f9d116815a241873f66c7dfed963bb16fb3ec6718f306/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f795050361c122f8186f9d116815a241873f66c7dfed963bb16fb3ec6718f306/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f795050361c122f8186f9d116815a241873f66c7dfed963bb16fb3ec6718f306/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-303179",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-303179/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-303179",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-303179",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-303179",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e640710dbb3060c2921575558df3e127df7bd22606e056c820d04b626c1f3cc",
	            "SandboxKey": "/var/run/docker/netns/3e640710dbb3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-303179": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:06:64:0c:a7:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7dd701f3791fa7f6d8831a64698f944225df32ea42e663c9bfc78d30eb09b5d6",
	                    "EndpointID": "293ef22cb9810246222e3d80c92d7380f75c0de283695bc3203d3c5c4709eec6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-303179",
	                        "c6af048d3f8e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
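Instead of dumping the whole inspect document, a single value can be read with a format template; the pattern below is the same one the Last Start log further down uses for 22/tcp, here pointed at the 8444/tcp mapping shown above:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-303179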
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179: exit status 2 (367.124034ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-303179 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-303179 logs -n 25: (1.317768184s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-600301 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │                     │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p no-preload-600301                                                                                                                                                                                                                          │ no-preload-600301            │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ delete  │ -p disable-driver-mounts-995056                                                                                                                                                                                                               │ disable-driver-mounts-995056 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:17 UTC │
	│ start   │ -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:17 UTC │ 24 Nov 25 04:19 UTC │
	│ image   │ embed-certs-520529 image list --format=json                                                                                                                                                                                                   │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ pause   │ -p embed-certs-520529 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │                     │
	│ delete  │ -p embed-certs-520529                                                                                                                                                                                                                         │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ delete  │ -p embed-certs-520529                                                                                                                                                                                                                         │ embed-certs-520529           │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:18 UTC │
	│ start   │ -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:18 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-543467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ stop    │ -p newest-cni-543467 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-543467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ start   │ -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-303179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-303179 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ image   │ newest-cni-543467 image list --format=json                                                                                                                                                                                                    │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ pause   │ -p newest-cni-543467 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-303179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ start   │ -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:20 UTC │
	│ delete  │ -p newest-cni-543467                                                                                                                                                                                                                          │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ delete  │ -p newest-cni-543467                                                                                                                                                                                                                          │ newest-cni-543467            │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │ 24 Nov 25 04:19 UTC │
	│ start   │ -p auto-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-778509                  │ jenkins │ v1.37.0 │ 24 Nov 25 04:19 UTC │                     │
	│ image   │ default-k8s-diff-port-303179 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:20 UTC │ 24 Nov 25 04:20 UTC │
	│ pause   │ -p default-k8s-diff-port-303179 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-303179 │ jenkins │ v1.37.0 │ 24 Nov 25 04:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 04:19:41
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 04:19:41.714561  502762 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:19:41.714787  502762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:19:41.714820  502762 out.go:374] Setting ErrFile to fd 2...
	I1124 04:19:41.714843  502762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:19:41.715126  502762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:19:41.715597  502762 out.go:368] Setting JSON to false
	I1124 04:19:41.716579  502762 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10911,"bootTime":1763947071,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:19:41.716691  502762 start.go:143] virtualization:  
	I1124 04:19:41.720399  502762 out.go:179] * [auto-778509] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:19:41.723984  502762 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:19:41.724063  502762 notify.go:221] Checking for updates...
	I1124 04:19:41.731442  502762 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:19:41.734577  502762 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:19:41.737841  502762 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:19:41.740942  502762 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:19:41.743852  502762 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:19:41.747232  502762 config.go:182] Loaded profile config "default-k8s-diff-port-303179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:19:41.747374  502762 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:19:41.792168  502762 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:19:41.792292  502762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:19:41.881715  502762 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 04:19:41.868627583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:19:41.881866  502762 docker.go:319] overlay module found
	I1124 04:19:41.885136  502762 out.go:179] * Using the docker driver based on user configuration
	I1124 04:19:41.888063  502762 start.go:309] selected driver: docker
	I1124 04:19:41.888080  502762 start.go:927] validating driver "docker" against <nil>
	I1124 04:19:41.888094  502762 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:19:41.888865  502762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:19:41.980590  502762 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 04:19:41.970535249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:19:41.980737  502762 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 04:19:41.980954  502762 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:19:41.983793  502762 out.go:179] * Using Docker driver with root privileges
	I1124 04:19:41.986794  502762 cni.go:84] Creating CNI manager for ""
	I1124 04:19:41.986883  502762 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:19:41.986897  502762 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 04:19:41.986994  502762 start.go:353] cluster config:
	{Name:auto-778509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-778509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:19:41.990241  502762 out.go:179] * Starting "auto-778509" primary control-plane node in "auto-778509" cluster
	I1124 04:19:41.993106  502762 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 04:19:41.996095  502762 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 04:19:41.998914  502762 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:19:41.998986  502762 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 04:19:41.999002  502762 cache.go:65] Caching tarball of preloaded images
	I1124 04:19:41.999106  502762 preload.go:238] Found /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 04:19:41.999123  502762 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 04:19:41.999258  502762 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/config.json ...
	I1124 04:19:41.999287  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/config.json: {Name:mkc5b7fac5f8da08cfeeb4fbe9dcebf6c531abcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:41.999487  502762 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 04:19:42.029240  502762 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 04:19:42.029271  502762 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 04:19:42.029288  502762 cache.go:243] Successfully downloaded all kic artifacts
	I1124 04:19:42.029328  502762 start.go:360] acquireMachinesLock for auto-778509: {Name:mkfa7cae0269d4581c03d0cc14aab7d3f8ab8b40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 04:19:42.029442  502762 start.go:364] duration metric: took 92.441µs to acquireMachinesLock for "auto-778509"
	I1124 04:19:42.029474  502762 start.go:93] Provisioning new machine with config: &{Name:auto-778509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-778509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:19:42.029553  502762 start.go:125] createHost starting for "" (driver="docker")
	I1124 04:19:41.091576  500996 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 04:19:41.091603  500996 machine.go:97] duration metric: took 4.563202446s to provisionDockerMachine
	I1124 04:19:41.091615  500996 start.go:293] postStartSetup for "default-k8s-diff-port-303179" (driver="docker")
	I1124 04:19:41.091627  500996 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:19:41.091716  500996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:19:41.091766  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:41.125225  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:41.261038  500996 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:19:41.264526  500996 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:19:41.264574  500996 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:19:41.264586  500996 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:19:41.264640  500996 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:19:41.264727  500996 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:19:41.264835  500996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:19:41.272257  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:19:41.299880  500996 start.go:296] duration metric: took 208.249493ms for postStartSetup
	I1124 04:19:41.299977  500996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:19:41.300023  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:41.320519  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:41.424440  500996 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:19:41.430173  500996 fix.go:56] duration metric: took 5.366786919s for fixHost
	I1124 04:19:41.430201  500996 start.go:83] releasing machines lock for "default-k8s-diff-port-303179", held for 5.366841475s
	I1124 04:19:41.430268  500996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303179
	I1124 04:19:41.457123  500996 ssh_runner.go:195] Run: cat /version.json
	I1124 04:19:41.457184  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:41.457465  500996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:19:41.457521  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:41.489367  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:41.506271  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:41.695530  500996 ssh_runner.go:195] Run: systemctl --version
	I1124 04:19:41.702211  500996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:19:41.747854  500996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:19:41.754158  500996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:19:41.754234  500996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:19:41.763789  500996 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 04:19:41.763817  500996 start.go:496] detecting cgroup driver to use...
	I1124 04:19:41.763849  500996 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:19:41.763912  500996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:19:41.782997  500996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:19:41.805232  500996 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:19:41.805294  500996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:19:41.834508  500996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:19:41.858970  500996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:19:42.047602  500996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:19:42.253237  500996 docker.go:234] disabling docker service ...
	I1124 04:19:42.253325  500996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:19:42.273360  500996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:19:42.289502  500996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:19:42.470535  500996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:19:42.647293  500996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:19:42.659972  500996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:19:42.679686  500996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:19:42.679751  500996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:42.693117  500996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:19:42.693186  500996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:42.706190  500996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:42.721986  500996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:42.731746  500996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:19:42.769860  500996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:42.796741  500996 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:42.806041  500996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:42.825062  500996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:19:42.833856  500996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:19:42.842563  500996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:43.031058  500996 ssh_runner.go:195] Run: sudo systemctl restart crio
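The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup_manager, conmon_cgroup, default_sysctls) and then restarts CRI-O. A minimal Go sketch of the same idempotent key-rewrite idiom, using a hypothetical local path in place of the remote file minikube edits over SSH:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setConfigLine replaces every existing `key = ...` line with `key = "value"`,
    // mirroring the sed -i 's|^.*key = .*$|...|' calls in the log above.
    func setConfigLine(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // "02-crio.conf" is a hypothetical local copy of /etc/crio/crio.conf.d/02-crio.conf.
        if err := setConfigLine("02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
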
	I1124 04:19:43.262697  500996 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:19:43.262769  500996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:19:43.267143  500996 start.go:564] Will wait 60s for crictl version
	I1124 04:19:43.267218  500996 ssh_runner.go:195] Run: which crictl
	I1124 04:19:43.271618  500996 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:19:43.304681  500996 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
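The two "Will wait 60s" steps above poll the guest until the CRI-O socket exists and crictl answers. A simplified poll-with-deadline sketch, assuming direct filesystem access rather than the ssh_runner the log goes through:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForFile polls until path exists or the deadline passes, like
    // minikube's 60s wait for /var/run/crio/crio.sock above.
    func waitForFile(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
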
	I1124 04:19:43.304762  500996 ssh_runner.go:195] Run: crio --version
	I1124 04:19:43.336279  500996 ssh_runner.go:195] Run: crio --version
	I1124 04:19:43.374027  500996 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:19:43.376931  500996 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:19:43.396047  500996 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 04:19:43.400619  500996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
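The /etc/hosts update above is the usual idempotent pin: drop any stale line for the name, append the fresh mapping, copy the result back over the original. The same idiom in Go, assuming direct file access (the log pipes it through bash over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost removes any line ending in "\t<name>" and appends "ip\tname",
    // matching the grep -v / echo / sudo cp pipeline in the log.
    func pinHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
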
	I1124 04:19:43.410613  500996 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:19:43.410755  500996 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:19:43.410807  500996 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:19:43.451172  500996 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:19:43.451194  500996 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:19:43.451252  500996 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:19:43.480316  500996 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:19:43.480389  500996 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:19:43.480411  500996 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1124 04:19:43.480550  500996 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-303179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 04:19:43.480672  500996 ssh_runner.go:195] Run: crio config
	I1124 04:19:43.548149  500996 cni.go:84] Creating CNI manager for ""
	I1124 04:19:43.548219  500996 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:19:43.548275  500996 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:19:43.548318  500996 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-303179 NodeName:default-k8s-diff-port-303179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:19:43.548515  500996 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-303179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
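The rendered kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that sanity-checks such a stream by decoding each document with gopkg.in/yaml.v3; this is an illustration, not how minikube itself validates the file:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for i := 1; ; i++ {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break // end of stream: all documents parsed
                }
                fmt.Fprintf(os.Stderr, "document %d: %v\n", i, err)
                os.Exit(1)
            }
            fmt.Printf("document %d: kind=%v apiVersion=%v\n", i, doc["kind"], doc["apiVersion"])
        }
    }
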
	I1124 04:19:43.548634  500996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:19:43.557227  500996 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:19:43.557348  500996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:19:43.565398  500996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 04:19:43.579269  500996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:19:43.592983  500996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1124 04:19:43.606361  500996 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:19:43.611101  500996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:19:43.620974  500996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:43.770597  500996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:19:43.788250  500996 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179 for IP: 192.168.85.2
	I1124 04:19:43.788324  500996 certs.go:195] generating shared ca certs ...
	I1124 04:19:43.788357  500996 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:43.788543  500996 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:19:43.788642  500996 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:19:43.788670  500996 certs.go:257] generating profile certs ...
	I1124 04:19:43.788807  500996 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.key
	I1124 04:19:43.788916  500996 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key.0cae04f4
	I1124 04:19:43.789023  500996 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.key
	I1124 04:19:43.789196  500996 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:19:43.789271  500996 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:19:43.789300  500996 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:19:43.789374  500996 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:19:43.789432  500996 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:19:43.789498  500996 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:19:43.789589  500996 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:19:43.790408  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:19:43.858294  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:19:43.887335  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:19:43.923632  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:19:43.951242  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 04:19:43.990988  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 04:19:44.016864  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:19:44.037994  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 04:19:44.089306  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:19:44.155766  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:19:44.184757  500996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:19:44.207627  500996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:19:44.222325  500996 ssh_runner.go:195] Run: openssl version
	I1124 04:19:44.229324  500996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:19:44.238997  500996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:19:44.243258  500996 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:19:44.243341  500996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:19:44.301149  500996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 04:19:44.311769  500996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:19:44.321037  500996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:44.325315  500996 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:44.325385  500996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:44.367344  500996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:19:44.376609  500996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:19:44.386665  500996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:19:44.391184  500996 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:19:44.391262  500996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:19:44.435577  500996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
	I1124 04:19:44.444653  500996 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:19:44.450874  500996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 04:19:44.493512  500996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 04:19:44.536190  500996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 04:19:44.592267  500996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 04:19:44.688152  500996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 04:19:44.770155  500996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
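Each openssl x509 -checkend 86400 run above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. The equivalent check with Go's crypto/x509, using a hypothetical local path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d, matching `openssl x509 -checkend` semantics.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("apiserver.crt", 24*time.Hour) // hypothetical path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }
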
	I1124 04:19:44.868237  500996 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-303179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:19:44.868339  500996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:19:44.868400  500996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:19:44.986834  500996 cri.go:89] found id: "e9dbfcfcc198e21983dc97bf121184dd3db9248de5fd970ee04f8ed5f32a25ed"
	I1124 04:19:44.986872  500996 cri.go:89] found id: "99ae33342ea980542a5d01e94e4e877f3a9a7f61e7804bf44fc417104b2c8f75"
	I1124 04:19:44.986880  500996 cri.go:89] found id: "5d9db75c10b0014f3fe772d0746170c1ac112901a7f81fceee9ad108d08be4d4"
	I1124 04:19:44.986883  500996 cri.go:89] found id: "7b5099e4fd3c18fc391ec751d92268e0f783642d7729eae47a7899934d2bf05a"
	I1124 04:19:44.986887  500996 cri.go:89] found id: ""
	I1124 04:19:44.986943  500996 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 04:19:45.049575  500996 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T04:19:45Z" level=error msg="open /run/runc: no such file or directory"
	I1124 04:19:45.049691  500996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:19:45.094902  500996 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 04:19:45.094926  500996 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 04:19:45.094992  500996 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 04:19:45.133776  500996 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 04:19:45.134221  500996 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-303179" does not appear in /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:19:45.134350  500996 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-289526/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-303179" cluster setting kubeconfig missing "default-k8s-diff-port-303179" context setting]
	I1124 04:19:45.134730  500996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:45.136521  500996 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 04:19:45.187361  500996 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 04:19:45.187455  500996 kubeadm.go:602] duration metric: took 92.51304ms to restartPrimaryControlPlane
	I1124 04:19:45.187480  500996 kubeadm.go:403] duration metric: took 319.249917ms to StartCluster
	I1124 04:19:45.187525  500996 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:45.187676  500996 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:19:45.188444  500996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:45.188707  500996 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:19:45.189131  500996 config.go:182] Loaded profile config "default-k8s-diff-port-303179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:19:45.189142  500996 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:19:45.189261  500996 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-303179"
	I1124 04:19:45.189280  500996 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-303179"
	W1124 04:19:45.189288  500996 addons.go:248] addon storage-provisioner should already be in state true
	I1124 04:19:45.189316  500996 host.go:66] Checking if "default-k8s-diff-port-303179" exists ...
	I1124 04:19:45.189323  500996 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-303179"
	I1124 04:19:45.189339  500996 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-303179"
	W1124 04:19:45.189346  500996 addons.go:248] addon dashboard should already be in state true
	I1124 04:19:45.189366  500996 host.go:66] Checking if "default-k8s-diff-port-303179" exists ...
	I1124 04:19:45.189865  500996 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:19:45.189949  500996 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:19:45.194712  500996 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-303179"
	I1124 04:19:45.194755  500996 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-303179"
	I1124 04:19:45.195159  500996 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:19:45.195426  500996 out.go:179] * Verifying Kubernetes components...
	I1124 04:19:45.202347  500996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:45.247063  500996 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:19:45.250492  500996 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:19:45.250522  500996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:19:45.250615  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:45.281634  500996 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 04:19:45.285053  500996 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 04:19:45.294008  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 04:19:45.294035  500996 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 04:19:45.294257  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:45.295301  500996 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-303179"
	W1124 04:19:45.295322  500996 addons.go:248] addon default-storageclass should already be in state true
	I1124 04:19:45.295349  500996 host.go:66] Checking if "default-k8s-diff-port-303179" exists ...
	I1124 04:19:45.295792  500996 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303179 --format={{.State.Status}}
	I1124 04:19:45.328139  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:45.356022  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:45.362065  500996 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:19:45.362092  500996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:19:45.362171  500996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303179
	I1124 04:19:45.385607  500996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/default-k8s-diff-port-303179/id_rsa Username:docker}
	I1124 04:19:42.034223  502762 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 04:19:42.034617  502762 start.go:159] libmachine.API.Create for "auto-778509" (driver="docker")
	I1124 04:19:42.034677  502762 client.go:173] LocalClient.Create starting
	I1124 04:19:42.034783  502762 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem
	I1124 04:19:42.034847  502762 main.go:143] libmachine: Decoding PEM data...
	I1124 04:19:42.034884  502762 main.go:143] libmachine: Parsing certificate...
	I1124 04:19:42.034983  502762 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem
	I1124 04:19:42.035033  502762 main.go:143] libmachine: Decoding PEM data...
	I1124 04:19:42.035053  502762 main.go:143] libmachine: Parsing certificate...
	I1124 04:19:42.035532  502762 cli_runner.go:164] Run: docker network inspect auto-778509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 04:19:42.057551  502762 cli_runner.go:211] docker network inspect auto-778509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 04:19:42.057652  502762 network_create.go:284] running [docker network inspect auto-778509] to gather additional debugging logs...
	I1124 04:19:42.057673  502762 cli_runner.go:164] Run: docker network inspect auto-778509
	W1124 04:19:42.077547  502762 cli_runner.go:211] docker network inspect auto-778509 returned with exit code 1
	I1124 04:19:42.077590  502762 network_create.go:287] error running [docker network inspect auto-778509]: docker network inspect auto-778509: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-778509 not found
	I1124 04:19:42.077607  502762 network_create.go:289] output of [docker network inspect auto-778509]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-778509 not found
	
	** /stderr **
	I1124 04:19:42.077733  502762 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:19:42.102281  502762 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-740fb099fccc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:7a:9c:b0:4d:41} reservation:<nil>}
	I1124 04:19:42.102791  502762 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b0f25a7c590 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:53:b3:a1:55:1a} reservation:<nil>}
	I1124 04:19:42.103084  502762 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c1d995330d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:83:d9:0c:83:10} reservation:<nil>}
	I1124 04:19:42.103532  502762 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a6a4e0}
	I1124 04:19:42.103553  502762 network_create.go:124] attempt to create docker network auto-778509 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 04:19:42.103613  502762 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-778509 auto-778509
	I1124 04:19:42.197872  502762 network_create.go:108] docker network auto-778509 192.168.76.0/24 created
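The network.go lines above walk the private 192.168.x.0/24 candidates, skip subnets already claimed by existing bridges, and take the first free one (192.168.76.0/24 here). A simplified sketch of that scan, with the taken list hard-coded from the log:

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet returns the first candidate CIDR not already taken,
    // loosely mirroring minikube's subnet scan.
    func firstFreeSubnet(candidates, taken []string) (string, error) {
        used := map[string]bool{}
        for _, t := range taken {
            _, n, err := net.ParseCIDR(t)
            if err != nil {
                return "", err
            }
            used[n.String()] = true
        }
        for _, c := range candidates {
            _, n, err := net.ParseCIDR(c)
            if err != nil {
                return "", err
            }
            if !used[n.String()] {
                return n.String(), nil
            }
        }
        return "", fmt.Errorf("no free subnet")
    }

    func main() {
        candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
        taken := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
        free, err := firstFreeSubnet(candidates, taken)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("using free private subnet", free) // 192.168.76.0/24, as in the log
    }
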
	I1124 04:19:42.197915  502762 kic.go:121] calculated static IP "192.168.76.2" for the "auto-778509" container
	I1124 04:19:42.197998  502762 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 04:19:42.225948  502762 cli_runner.go:164] Run: docker volume create auto-778509 --label name.minikube.sigs.k8s.io=auto-778509 --label created_by.minikube.sigs.k8s.io=true
	I1124 04:19:42.258689  502762 oci.go:103] Successfully created a docker volume auto-778509
	I1124 04:19:42.258776  502762 cli_runner.go:164] Run: docker run --rm --name auto-778509-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-778509 --entrypoint /usr/bin/test -v auto-778509:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 04:19:42.875932  502762 oci.go:107] Successfully prepared a docker volume auto-778509
	I1124 04:19:42.875993  502762 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:19:42.876006  502762 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 04:19:42.876073  502762 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-778509:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 04:19:45.761317  500996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:19:45.788586  500996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:19:45.813271  500996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:19:45.919766  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 04:19:45.919802  500996 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 04:19:45.925214  500996 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-303179" to be "Ready" ...
	I1124 04:19:46.131204  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 04:19:46.131234  500996 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 04:19:46.224377  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 04:19:46.224415  500996 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 04:19:46.292863  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 04:19:46.292904  500996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 04:19:46.350264  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 04:19:46.350292  500996 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 04:19:46.395116  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 04:19:46.395143  500996 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 04:19:46.450354  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 04:19:46.450390  500996 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 04:19:46.479382  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 04:19:46.479409  500996 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 04:19:46.503776  500996 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 04:19:46.503807  500996 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 04:19:46.518702  500996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 04:19:48.397833  502762 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-778509:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (5.521707022s)
	I1124 04:19:48.397883  502762 kic.go:203] duration metric: took 5.521872061s to extract preloaded images to volume ...
	W1124 04:19:48.398039  502762 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 04:19:48.398150  502762 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 04:19:48.511173  502762 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-778509 --name auto-778509 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-778509 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-778509 --network auto-778509 --ip 192.168.76.2 --volume auto-778509:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 04:19:48.958585  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Running}}
	I1124 04:19:48.980560  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Status}}
	I1124 04:19:49.021781  502762 cli_runner.go:164] Run: docker exec auto-778509 stat /var/lib/dpkg/alternatives/iptables
	I1124 04:19:49.101621  502762 oci.go:144] the created container "auto-778509" has a running status.
	I1124 04:19:49.101656  502762 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa...
	I1124 04:19:49.505099  502762 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 04:19:49.541949  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Status}}
	I1124 04:19:49.574957  502762 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 04:19:49.574983  502762 kic_runner.go:114] Args: [docker exec --privileged auto-778509 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 04:19:49.652064  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Status}}
	I1124 04:19:49.684078  502762 machine.go:94] provisionDockerMachine start ...
	I1124 04:19:49.684169  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:49.708173  502762 main.go:143] libmachine: Using SSH client type: native
	I1124 04:19:49.708504  502762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 04:19:49.708513  502762 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 04:19:49.709201  502762 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 04:19:51.643750  500996 node_ready.go:49] node "default-k8s-diff-port-303179" is "Ready"
	I1124 04:19:51.643779  500996 node_ready.go:38] duration metric: took 5.718530892s for node "default-k8s-diff-port-303179" to be "Ready" ...
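The node_ready wait above amounts to polling the Node object until its Ready condition reports True. A hedged client-go sketch of such a poll (assuming k8s.io/client-go is available; not minikube's exact code):

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has condition Ready=True.
    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            if ctx.Err() != nil {
                fmt.Fprintln(os.Stderr, "timed out waiting for node Ready")
                os.Exit(1)
            }
            if ok, err := nodeReady(ctx, cs, "default-k8s-diff-port-303179"); err == nil && ok {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
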
	I1124 04:19:51.643793  500996 api_server.go:52] waiting for apiserver process to appear ...
	I1124 04:19:51.643852  500996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 04:19:51.868994  500996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.080369953s)
	I1124 04:19:53.182988  500996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.369677982s)
	I1124 04:19:53.183125  500996 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.664388185s)
	I1124 04:19:53.183257  500996 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.539393244s)
	I1124 04:19:53.183277  500996 api_server.go:72] duration metric: took 7.994534401s to wait for apiserver process to appear ...
	I1124 04:19:53.183284  500996 api_server.go:88] waiting for apiserver healthz status ...
	I1124 04:19:53.183305  500996 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 04:19:53.186530  500996 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-303179 addons enable metrics-server
	
	I1124 04:19:53.189348  500996 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1124 04:19:53.192264  500996 addons.go:530] duration metric: took 8.003124691s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1124 04:19:53.202819  500996 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 04:19:53.202851  500996 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 04:19:53.684024  500996 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 04:19:53.693702  500996 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 04:19:53.694831  500996 api_server.go:141] control plane version: v1.34.1
	I1124 04:19:53.694859  500996 api_server.go:131] duration metric: took 511.564647ms to wait for apiserver health ...
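The healthz probe above first returns 500 while poststarthook/rbac/bootstrap-roles is still pending, then flips to 200 once bootstrap completes; the wait loop simply re-polls until it sees 200. A minimal sketch of that loop which skips TLS verification for brevity (the real client verifies against the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // InsecureSkipVerify is for this sketch only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        if err := waitHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
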
	I1124 04:19:53.694872  500996 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 04:19:53.701883  500996 system_pods.go:59] 8 kube-system pods found
	I1124 04:19:53.701927  500996 system_pods.go:61] "coredns-66bc5c9577-jtn7v" [cd5d148d-8e9e-4bac-a54c-d71637a8cb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:19:53.701942  500996 system_pods.go:61] "etcd-default-k8s-diff-port-303179" [e10607ab-490f-4a61-a1f9-a3c5c06f86b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:19:53.701948  500996 system_pods.go:61] "kindnet-wpp6p" [0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3] Running
	I1124 04:19:53.701960  500996 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-303179" [6f48a510-e83c-4667-a542-5953227201ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:19:53.701967  500996 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-303179" [6f1d9347-dbe0-4770-b829-de7cf4fe9934] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:19:53.701977  500996 system_pods.go:61] "kube-proxy-dxbvb" [24177ca5-eb2f-4ac2-a32c-d384781bad58] Running
	I1124 04:19:53.701985  500996 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-303179" [b819c0ad-3c09-46e4-84a8-e7f1ad21b768] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:19:53.701989  500996 system_pods.go:61] "storage-provisioner" [4d7d1174-e169-4297-a8a2-55a47f03d9d6] Running
	I1124 04:19:53.701995  500996 system_pods.go:74] duration metric: took 7.112865ms to wait for pod list to return data ...
	I1124 04:19:53.702007  500996 default_sa.go:34] waiting for default service account to be created ...
	I1124 04:19:53.710162  500996 default_sa.go:45] found service account: "default"
	I1124 04:19:53.710191  500996 default_sa.go:55] duration metric: took 8.176615ms for default service account to be created ...
	I1124 04:19:53.710208  500996 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 04:19:53.717323  500996 system_pods.go:86] 8 kube-system pods found
	I1124 04:19:53.717361  500996 system_pods.go:89] "coredns-66bc5c9577-jtn7v" [cd5d148d-8e9e-4bac-a54c-d71637a8cb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 04:19:53.717371  500996 system_pods.go:89] "etcd-default-k8s-diff-port-303179" [e10607ab-490f-4a61-a1f9-a3c5c06f86b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 04:19:53.717377  500996 system_pods.go:89] "kindnet-wpp6p" [0a1f5799-1a90-4c0a-a0a3-9508d80fd8f3] Running
	I1124 04:19:53.717384  500996 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-303179" [6f48a510-e83c-4667-a542-5953227201ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 04:19:53.717391  500996 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-303179" [6f1d9347-dbe0-4770-b829-de7cf4fe9934] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 04:19:53.717396  500996 system_pods.go:89] "kube-proxy-dxbvb" [24177ca5-eb2f-4ac2-a32c-d384781bad58] Running
	I1124 04:19:53.717402  500996 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-303179" [b819c0ad-3c09-46e4-84a8-e7f1ad21b768] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 04:19:53.717407  500996 system_pods.go:89] "storage-provisioner" [4d7d1174-e169-4297-a8a2-55a47f03d9d6] Running
	I1124 04:19:53.717415  500996 system_pods.go:126] duration metric: took 7.199931ms to wait for k8s-apps to be running ...
	I1124 04:19:53.717432  500996 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 04:19:53.717488  500996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 04:19:53.732908  500996 system_svc.go:56] duration metric: took 15.477675ms WaitForService to wait for kubelet
	I1124 04:19:53.732946  500996 kubeadm.go:587] duration metric: took 8.544193668s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 04:19:53.732965  500996 node_conditions.go:102] verifying NodePressure condition ...
	I1124 04:19:53.742917  500996 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 04:19:53.742953  500996 node_conditions.go:123] node cpu capacity is 2
	I1124 04:19:53.742966  500996 node_conditions.go:105] duration metric: took 9.995813ms to run NodePressure ...
	I1124 04:19:53.742980  500996 start.go:242] waiting for startup goroutines ...
	I1124 04:19:53.742988  500996 start.go:247] waiting for cluster config update ...
	I1124 04:19:53.743001  500996 start.go:256] writing updated cluster config ...
	I1124 04:19:53.743269  500996 ssh_runner.go:195] Run: rm -f paused
	I1124 04:19:53.748360  500996 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:19:53.753936  500996 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jtn7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:19:52.886701  502762 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-778509
	
	I1124 04:19:52.886728  502762 ubuntu.go:182] provisioning hostname "auto-778509"
	I1124 04:19:52.886817  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:52.911383  502762 main.go:143] libmachine: Using SSH client type: native
	I1124 04:19:52.911707  502762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 04:19:52.911725  502762 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-778509 && echo "auto-778509" | sudo tee /etc/hostname
	I1124 04:19:53.109055  502762 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-778509
	
	I1124 04:19:53.109218  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:53.137152  502762 main.go:143] libmachine: Using SSH client type: native
	I1124 04:19:53.137459  502762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 04:19:53.137475  502762 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-778509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-778509/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-778509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 04:19:53.306528  502762 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 04:19:53.306596  502762 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-289526/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-289526/.minikube}
	I1124 04:19:53.306640  502762 ubuntu.go:190] setting up certificates
	I1124 04:19:53.306693  502762 provision.go:84] configureAuth start
	I1124 04:19:53.306780  502762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-778509
	I1124 04:19:53.336318  502762 provision.go:143] copyHostCerts
	I1124 04:19:53.336376  502762 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem, removing ...
	I1124 04:19:53.336385  502762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem
	I1124 04:19:53.336472  502762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/ca.pem (1082 bytes)
	I1124 04:19:53.336574  502762 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem, removing ...
	I1124 04:19:53.336580  502762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem
	I1124 04:19:53.336612  502762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/cert.pem (1123 bytes)
	I1124 04:19:53.336669  502762 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem, removing ...
	I1124 04:19:53.336674  502762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem
	I1124 04:19:53.336698  502762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-289526/.minikube/key.pem (1675 bytes)
	I1124 04:19:53.336753  502762 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem org=jenkins.auto-778509 san=[127.0.0.1 192.168.76.2 auto-778509 localhost minikube]
	I1124 04:19:53.721754  502762 provision.go:177] copyRemoteCerts
	I1124 04:19:53.721861  502762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 04:19:53.721934  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:53.754731  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:19:53.859301  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 04:19:53.879396  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 04:19:53.910331  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 04:19:53.928398  502762 provision.go:87] duration metric: took 621.664588ms to configureAuth
	I1124 04:19:53.928424  502762 ubuntu.go:206] setting minikube options for container-runtime
	I1124 04:19:53.928612  502762 config.go:182] Loaded profile config "auto-778509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:19:53.928715  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:53.946747  502762 main.go:143] libmachine: Using SSH client type: native
	I1124 04:19:53.947065  502762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 04:19:53.947097  502762 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 04:19:54.258804  502762 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 04:19:54.258884  502762 machine.go:97] duration metric: took 4.574784507s to provisionDockerMachine
	I1124 04:19:54.258910  502762 client.go:176] duration metric: took 12.224218341s to LocalClient.Create
	I1124 04:19:54.258962  502762 start.go:167] duration metric: took 12.224347408s to libmachine.API.Create "auto-778509"
	I1124 04:19:54.258977  502762 start.go:293] postStartSetup for "auto-778509" (driver="docker")
	I1124 04:19:54.258987  502762 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 04:19:54.259050  502762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 04:19:54.259100  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:54.277972  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:19:54.387362  502762 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 04:19:54.390633  502762 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 04:19:54.390717  502762 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 04:19:54.390747  502762 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/addons for local assets ...
	I1124 04:19:54.390802  502762 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-289526/.minikube/files for local assets ...
	I1124 04:19:54.390901  502762 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem -> 2913892.pem in /etc/ssl/certs
	I1124 04:19:54.391010  502762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 04:19:54.398579  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:19:54.417007  502762 start.go:296] duration metric: took 158.014206ms for postStartSetup
	I1124 04:19:54.417421  502762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-778509
	I1124 04:19:54.436970  502762 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/config.json ...
	I1124 04:19:54.437267  502762 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 04:19:54.437324  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:54.454426  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:19:54.555433  502762 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 04:19:54.560398  502762 start.go:128] duration metric: took 12.530827244s to createHost
	I1124 04:19:54.560424  502762 start.go:83] releasing machines lock for "auto-778509", held for 12.530969653s
	I1124 04:19:54.560506  502762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-778509
	I1124 04:19:54.577713  502762 ssh_runner.go:195] Run: cat /version.json
	I1124 04:19:54.577769  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:54.578066  502762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 04:19:54.578126  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:19:54.596570  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:19:54.615102  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:19:54.702260  502762 ssh_runner.go:195] Run: systemctl --version
	I1124 04:19:54.792180  502762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 04:19:54.827165  502762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 04:19:54.831912  502762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 04:19:54.832033  502762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 04:19:54.893192  502762 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
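
The find invocation above is logged with its shell quoting stripped; reconstructed with quoting (an illustrative sketch, assuming GNU find, which substitutes {} anywhere in -exec arguments), it disables any pre-existing bridge/podman CNI configs by renaming them:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
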
	I1124 04:19:54.893211  502762 start.go:496] detecting cgroup driver to use...
	I1124 04:19:54.893245  502762 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 04:19:54.893291  502762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 04:19:54.916849  502762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 04:19:54.935247  502762 docker.go:218] disabling cri-docker service (if available) ...
	I1124 04:19:54.935306  502762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 04:19:54.955604  502762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 04:19:54.975724  502762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 04:19:55.122650  502762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 04:19:55.249674  502762 docker.go:234] disabling docker service ...
	I1124 04:19:55.249791  502762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 04:19:55.273380  502762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 04:19:55.287785  502762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 04:19:55.409530  502762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 04:19:55.556079  502762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 04:19:55.571109  502762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 04:19:55.597444  502762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 04:19:55.597539  502762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.616243  502762 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 04:19:55.616352  502762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.633202  502762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.643954  502762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.659551  502762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 04:19:55.673295  502762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.684007  502762 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.704229  502762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 04:19:55.715135  502762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 04:19:55.722949  502762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 04:19:55.732001  502762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:55.869324  502762 ssh_runner.go:195] Run: sudo systemctl restart crio
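
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys before the restart; this is a sketch of the expected end state (section placement as in stock CRI-O configs), not a capture from this run:

	# /etc/crio/crio.conf.d/02-crio.conf (expected shape)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
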
	I1124 04:19:56.065366  502762 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 04:19:56.065444  502762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 04:19:56.069454  502762 start.go:564] Will wait 60s for crictl version
	I1124 04:19:56.069527  502762 ssh_runner.go:195] Run: which crictl
	I1124 04:19:56.073414  502762 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 04:19:56.103902  502762 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 04:19:56.104075  502762 ssh_runner.go:195] Run: crio --version
	I1124 04:19:56.138255  502762 ssh_runner.go:195] Run: crio --version
	I1124 04:19:56.172724  502762 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 04:19:56.175712  502762 cli_runner.go:164] Run: docker network inspect auto-778509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 04:19:56.192082  502762 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 04:19:56.196243  502762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:19:56.209101  502762 kubeadm.go:884] updating cluster {Name:auto-778509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-778509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 04:19:56.209233  502762 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 04:19:56.209297  502762 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:19:56.261061  502762 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:19:56.261087  502762 crio.go:433] Images already preloaded, skipping extraction
	I1124 04:19:56.261145  502762 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 04:19:56.298574  502762 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 04:19:56.298601  502762 cache_images.go:86] Images are preloaded, skipping loading
	I1124 04:19:56.298610  502762 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1124 04:19:56.298744  502762 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-778509 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-778509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 04:19:56.298848  502762 ssh_runner.go:195] Run: crio config
	I1124 04:19:56.401606  502762 cni.go:84] Creating CNI manager for ""
	I1124 04:19:56.401678  502762 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:19:56.401713  502762 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 04:19:56.401760  502762 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-778509 NodeName:auto-778509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 04:19:56.401940  502762 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-778509"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 04:19:56.402044  502762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 04:19:56.414553  502762 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 04:19:56.414687  502762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 04:19:56.423671  502762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1124 04:19:56.448783  502762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 04:19:56.469149  502762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
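
The generated kubeadm config dumped above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new; it can be sanity-checked offline before init. A minimal sketch, assuming the kubeadm binary minikube stages under /var/lib/minikube/binaries/v1.34.1 supports the `config validate` subcommand (present in recent kubeadm releases):

	# report validation problems in the generated config without touching the cluster
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
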
	I1124 04:19:56.497133  502762 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 04:19:56.501400  502762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 04:19:56.522665  502762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:19:56.721447  502762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:19:56.751689  502762 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509 for IP: 192.168.76.2
	I1124 04:19:56.751760  502762 certs.go:195] generating shared ca certs ...
	I1124 04:19:56.751791  502762 certs.go:227] acquiring lock for ca certs: {Name:mk13d1a72b5c7901cf99d88e558f62c6b8512807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:56.751968  502762 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key
	I1124 04:19:56.752054  502762 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key
	I1124 04:19:56.752097  502762 certs.go:257] generating profile certs ...
	I1124 04:19:56.752189  502762 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.key
	I1124 04:19:56.752224  502762 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.crt with IP's: []
	I1124 04:19:57.247429  502762 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.crt ...
	I1124 04:19:57.247462  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.crt: {Name:mk7f604724bf42f096e7e40c20f10467d20ef986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:57.247699  502762 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.key ...
	I1124 04:19:57.247716  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.key: {Name:mkc19ae019700138310310707f7a53514ede31fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:57.247860  502762 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.key.5341c204
	I1124 04:19:57.247883  502762 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.crt.5341c204 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 04:19:57.616421  502762 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.crt.5341c204 ...
	I1124 04:19:57.616454  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.crt.5341c204: {Name:mk527bb9f2a3625f56a62720a6e0c86127eeb952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:57.616669  502762 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.key.5341c204 ...
	I1124 04:19:57.616691  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.key.5341c204: {Name:mkdc480e26e16274f15be0799babe88db18343fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:57.616831  502762 certs.go:382] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.crt.5341c204 -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.crt
	I1124 04:19:57.616950  502762 certs.go:386] copying /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.key.5341c204 -> /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.key
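
The apiserver cert generated above is signed for the IP set [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]; to confirm which SANs actually landed in such a cert, it can be decoded with openssl (illustrative, using the profile path from this run):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
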
	I1124 04:19:57.617037  502762 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.key
	I1124 04:19:57.617071  502762 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.crt with IP's: []
	I1124 04:19:59.021044  502762 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.crt ...
	I1124 04:19:59.021071  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.crt: {Name:mk1a6098decd76def5555c23226cc66fa41fc11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:59.021225  502762 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.key ...
	I1124 04:19:59.021233  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.key: {Name:mk551d9401b2ee0595b5e7123fe4053b13b4b7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:19:59.021400  502762 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem (1338 bytes)
	W1124 04:19:59.021439  502762 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389_empty.pem, impossibly tiny 0 bytes
	I1124 04:19:59.021447  502762 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 04:19:59.021474  502762 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/ca.pem (1082 bytes)
	I1124 04:19:59.021501  502762 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/cert.pem (1123 bytes)
	I1124 04:19:59.021526  502762 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/certs/key.pem (1675 bytes)
	I1124 04:19:59.021571  502762 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem (1708 bytes)
	I1124 04:19:59.022132  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 04:19:59.045619  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 04:19:59.071174  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 04:19:59.100826  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 04:19:59.121669  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1124 04:19:59.165965  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 04:19:59.205118  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 04:19:59.252447  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 04:19:59.273789  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 04:19:59.293675  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/certs/291389.pem --> /usr/share/ca-certificates/291389.pem (1338 bytes)
	I1124 04:19:59.312858  502762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/ssl/certs/2913892.pem --> /usr/share/ca-certificates/2913892.pem (1708 bytes)
	I1124 04:19:59.332179  502762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 04:19:59.346409  502762 ssh_runner.go:195] Run: openssl version
	I1124 04:19:59.353209  502762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 04:19:59.362292  502762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:59.366356  502762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 03:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:59.366416  502762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 04:19:59.409012  502762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 04:19:59.418088  502762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/291389.pem && ln -fs /usr/share/ca-certificates/291389.pem /etc/ssl/certs/291389.pem"
	I1124 04:19:59.427069  502762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/291389.pem
	I1124 04:19:59.431417  502762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 03:20 /usr/share/ca-certificates/291389.pem
	I1124 04:19:59.431531  502762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/291389.pem
	I1124 04:19:59.473344  502762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/291389.pem /etc/ssl/certs/51391683.0"
	I1124 04:19:59.482504  502762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2913892.pem && ln -fs /usr/share/ca-certificates/2913892.pem /etc/ssl/certs/2913892.pem"
	I1124 04:19:59.491763  502762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2913892.pem
	I1124 04:19:59.496190  502762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 03:20 /usr/share/ca-certificates/2913892.pem
	I1124 04:19:59.496309  502762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2913892.pem
	I1124 04:19:59.538182  502762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2913892.pem /etc/ssl/certs/3ec20f2e.0"
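
The openssl/ln pairs above implement the standard OpenSSL CA-directory layout: `openssl x509 -hash` prints the subject-name hash, and trust lookups under /etc/ssl/certs resolve through `<hash>.0` symlinks. The pattern, sketched with the minikube CA from this run (whose hash came out as b5213941 above):

	# compute the subject hash openssl uses for lookup
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# link it into the CA directory as <hash>.0 (first cert with that hash)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
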
	I1124 04:19:59.547469  502762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 04:19:59.552073  502762 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 04:19:59.552173  502762 kubeadm.go:401] StartCluster: {Name:auto-778509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-778509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 04:19:59.552300  502762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 04:19:59.552391  502762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 04:19:59.592114  502762 cri.go:89] found id: ""
	I1124 04:19:59.592294  502762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 04:19:59.603889  502762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 04:19:59.616566  502762 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 04:19:59.616722  502762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 04:19:59.628667  502762 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 04:19:59.628746  502762 kubeadm.go:158] found existing configuration files:
	
	I1124 04:19:59.628831  502762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 04:19:59.640858  502762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 04:19:59.640967  502762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 04:19:59.649988  502762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 04:19:59.659579  502762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 04:19:59.659723  502762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 04:19:59.668143  502762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 04:19:59.677347  502762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 04:19:59.677477  502762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 04:19:59.687580  502762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 04:19:59.699341  502762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 04:19:59.699470  502762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 04:19:59.710770  502762 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 04:19:59.771544  502762 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 04:19:59.772197  502762 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 04:19:59.800957  502762 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 04:19:59.801070  502762 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 04:19:59.801135  502762 kubeadm.go:319] OS: Linux
	I1124 04:19:59.801214  502762 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 04:19:59.801283  502762 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 04:19:59.801354  502762 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 04:19:59.801423  502762 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 04:19:59.801503  502762 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 04:19:59.801573  502762 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 04:19:59.801652  502762 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 04:19:59.801718  502762 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 04:19:59.801782  502762 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 04:19:59.926785  502762 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 04:19:59.926938  502762 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 04:19:59.927057  502762 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 04:19:59.954871  502762 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1124 04:19:55.759927  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:19:57.761659  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:19:59.764066  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:19:59.962585  502762 out.go:252]   - Generating certificates and keys ...
	I1124 04:19:59.962724  502762 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 04:19:59.962816  502762 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 04:20:00.302564  502762 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 04:20:00.928473  502762 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 04:20:01.089545  502762 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 04:20:01.594378  502762 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1124 04:20:01.780480  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:04.262358  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:20:03.650805  502762 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 04:20:03.650950  502762 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-778509 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 04:20:04.844634  502762 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 04:20:04.844777  502762 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-778509 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 04:20:05.556431  502762 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 04:20:06.435154  502762 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 04:20:06.820544  502762 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 04:20:06.821124  502762 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 04:20:06.983752  502762 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 04:20:07.671314  502762 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 04:20:08.794139  502762 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 04:20:09.578872  502762 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 04:20:09.975638  502762 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 04:20:09.976332  502762 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 04:20:09.978834  502762 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1124 04:20:06.761462  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:09.261839  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:20:09.982152  502762 out.go:252]   - Booting up control plane ...
	I1124 04:20:09.982266  502762 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 04:20:09.982345  502762 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 04:20:09.984280  502762 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 04:20:10.013293  502762 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 04:20:10.013407  502762 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 04:20:10.017662  502762 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 04:20:10.020282  502762 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 04:20:10.020344  502762 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 04:20:10.166949  502762 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 04:20:10.167070  502762 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 04:20:11.169806  502762 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001813204s
	I1124 04:20:11.171972  502762 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 04:20:11.172327  502762 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 04:20:11.172645  502762 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 04:20:11.173469  502762 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1124 04:20:11.760639  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:14.260151  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:20:13.694654  502762 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.520372444s
	I1124 04:20:14.953882  502762 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.779333017s
	I1124 04:20:16.675325  502762 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502269303s
	I1124 04:20:16.696242  502762 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 04:20:16.712222  502762 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 04:20:16.730801  502762 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 04:20:16.731019  502762 kubeadm.go:319] [mark-control-plane] Marking the node auto-778509 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 04:20:16.744790  502762 kubeadm.go:319] [bootstrap-token] Using token: 81yeiu.qp3nz3md4mckox5j
	I1124 04:20:16.747861  502762 out.go:252]   - Configuring RBAC rules ...
	I1124 04:20:16.748011  502762 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 04:20:16.753293  502762 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 04:20:16.767878  502762 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 04:20:16.775459  502762 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 04:20:16.780101  502762 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 04:20:16.784711  502762 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 04:20:17.087314  502762 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 04:20:17.555386  502762 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 04:20:18.089788  502762 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 04:20:18.090211  502762 kubeadm.go:319] 
	I1124 04:20:18.090294  502762 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 04:20:18.090305  502762 kubeadm.go:319] 
	I1124 04:20:18.090390  502762 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 04:20:18.090400  502762 kubeadm.go:319] 
	I1124 04:20:18.090425  502762 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 04:20:18.090538  502762 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 04:20:18.090597  502762 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 04:20:18.090606  502762 kubeadm.go:319] 
	I1124 04:20:18.090660  502762 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 04:20:18.090669  502762 kubeadm.go:319] 
	I1124 04:20:18.090717  502762 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 04:20:18.090723  502762 kubeadm.go:319] 
	I1124 04:20:18.090775  502762 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 04:20:18.090855  502762 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 04:20:18.090926  502762 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 04:20:18.090935  502762 kubeadm.go:319] 
	I1124 04:20:18.091036  502762 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 04:20:18.091126  502762 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 04:20:18.091138  502762 kubeadm.go:319] 
	I1124 04:20:18.091223  502762 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 81yeiu.qp3nz3md4mckox5j \
	I1124 04:20:18.091333  502762 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 \
	I1124 04:20:18.091358  502762 kubeadm.go:319] 	--control-plane 
	I1124 04:20:18.091367  502762 kubeadm.go:319] 
	I1124 04:20:18.091451  502762 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 04:20:18.091460  502762 kubeadm.go:319] 
	I1124 04:20:18.091543  502762 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 81yeiu.qp3nz3md4mckox5j \
	I1124 04:20:18.091649  502762 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2194168f8292dc0469a2699229b86460c4906ab7e633f8753935c94ddff37855 
	I1124 04:20:18.095449  502762 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 04:20:18.095676  502762 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 04:20:18.095792  502762 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 04:20:18.095815  502762 cni.go:84] Creating CNI manager for ""
	I1124 04:20:18.095823  502762 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 04:20:18.099085  502762 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1124 04:20:16.260331  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:18.261693  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:20:18.102070  502762 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 04:20:18.106654  502762 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 04:20:18.106683  502762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 04:20:18.122291  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 04:20:18.515789  502762 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 04:20:18.515930  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:18.516007  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-778509 minikube.k8s.io/updated_at=2025_11_24T04_20_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=auto-778509 minikube.k8s.io/primary=true
	I1124 04:20:18.749573  502762 ops.go:34] apiserver oom_adj: -16
	I1124 04:20:18.749683  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:19.249730  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:19.750591  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:20.249772  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:20.750031  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:21.249787  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:21.749778  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:22.249943  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:22.750659  502762 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 04:20:22.870719  502762 kubeadm.go:1114] duration metric: took 4.354833647s to wait for elevateKubeSystemPrivileges
	I1124 04:20:22.870748  502762 kubeadm.go:403] duration metric: took 23.318581272s to StartCluster
	I1124 04:20:22.870765  502762 settings.go:142] acquiring lock: {Name:mkbb4fe81234aed150705ba63a85f83232f71688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:20:22.870828  502762 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:20:22.871900  502762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/kubeconfig: {Name:mkb927d54c1c5489b1562496c31c4a11f46f8c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 04:20:22.872124  502762 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 04:20:22.872289  502762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 04:20:22.872565  502762 config.go:182] Loaded profile config "auto-778509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:20:22.872599  502762 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 04:20:22.872658  502762 addons.go:70] Setting storage-provisioner=true in profile "auto-778509"
	I1124 04:20:22.872673  502762 addons.go:239] Setting addon storage-provisioner=true in "auto-778509"
	I1124 04:20:22.872694  502762 host.go:66] Checking if "auto-778509" exists ...
	I1124 04:20:22.873226  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Status}}
	I1124 04:20:22.873823  502762 addons.go:70] Setting default-storageclass=true in profile "auto-778509"
	I1124 04:20:22.873847  502762 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-778509"
	I1124 04:20:22.874137  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Status}}
	I1124 04:20:22.875908  502762 out.go:179] * Verifying Kubernetes components...
	I1124 04:20:22.879168  502762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 04:20:22.931664  502762 addons.go:239] Setting addon default-storageclass=true in "auto-778509"
	I1124 04:20:22.931703  502762 host.go:66] Checking if "auto-778509" exists ...
	I1124 04:20:22.932167  502762 cli_runner.go:164] Run: docker container inspect auto-778509 --format={{.State.Status}}
	I1124 04:20:22.936599  502762 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 04:20:22.939497  502762 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:20:22.939522  502762 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 04:20:22.939591  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:20:22.971298  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:20:22.978679  502762 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 04:20:22.978700  502762 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 04:20:22.978758  502762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-778509
	I1124 04:20:23.004493  502762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/auto-778509/id_rsa Username:docker}
	I1124 04:20:23.347224  502762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 04:20:23.373992  502762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 04:20:23.374154  502762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 04:20:23.485742  502762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 04:20:24.251324  502762 node_ready.go:35] waiting up to 15m0s for node "auto-778509" to be "Ready" ...
	I1124 04:20:24.250368  502762 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 04:20:24.303673  502762 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1124 04:20:20.760224  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:22.760355  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:24.761632  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:20:24.307561  502762 addons.go:530] duration metric: took 1.434948386s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 04:20:24.757775  502762 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-778509" context rescaled to 1 replicas
	W1124 04:20:26.254915  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:27.260440  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	W1124 04:20:29.759077  500996 pod_ready.go:104] pod "coredns-66bc5c9577-jtn7v" is not "Ready", error: <nil>
	I1124 04:20:30.260647  500996 pod_ready.go:94] pod "coredns-66bc5c9577-jtn7v" is "Ready"
	I1124 04:20:30.260679  500996 pod_ready.go:86] duration metric: took 36.506710943s for pod "coredns-66bc5c9577-jtn7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.263638  500996 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.269443  500996 pod_ready.go:94] pod "etcd-default-k8s-diff-port-303179" is "Ready"
	I1124 04:20:30.269477  500996 pod_ready.go:86] duration metric: took 5.80752ms for pod "etcd-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.272217  500996 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.277309  500996 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-303179" is "Ready"
	I1124 04:20:30.277336  500996 pod_ready.go:86] duration metric: took 5.090834ms for pod "kube-apiserver-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.284180  500996 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.457131  500996 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-303179" is "Ready"
	I1124 04:20:30.457163  500996 pod_ready.go:86] duration metric: took 172.953995ms for pod "kube-controller-manager-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:30.657378  500996 pod_ready.go:83] waiting for pod "kube-proxy-dxbvb" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 04:20:28.255071  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:30.256521  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	I1124 04:20:31.057977  500996 pod_ready.go:94] pod "kube-proxy-dxbvb" is "Ready"
	I1124 04:20:31.058008  500996 pod_ready.go:86] duration metric: took 400.601728ms for pod "kube-proxy-dxbvb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:31.257938  500996 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:31.657599  500996 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-303179" is "Ready"
	I1124 04:20:31.657628  500996 pod_ready.go:86] duration metric: took 399.660628ms for pod "kube-scheduler-default-k8s-diff-port-303179" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 04:20:31.657640  500996 pod_ready.go:40] duration metric: took 37.909245818s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 04:20:31.716927  500996 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 04:20:31.719974  500996 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-303179" cluster and "default" namespace by default
	W1124 04:20:32.754699  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:35.254390  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:37.254838  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:39.754997  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:42.256249  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
	W1124 04:20:44.262853  502762 node_ready.go:57] node "auto-778509" has "Ready":"False" status (will retry)
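
	The retry loop above is minikube polling the node's Ready condition for auto-778509. The health endpoints kubeadm verified earlier in this trace can be re-checked by hand; a minimal sketch, assuming the URLs logged by this run, that curl is present in the node image, and that the kubeconfig context matches the profile name (minikube's default):

	kubectl --context auto-778509 get node auto-778509
	minikube -p auto-778509 ssh -- curl -sk https://192.168.76.2:8443/livez      # kube-apiserver
	minikube -p auto-778509 ssh -- curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	minikube -p auto-778509 ssh -- curl -sk https://127.0.0.1:10259/livez        # kube-scheduler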
	
	
	==> CRI-O <==
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.445081105Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=43580074-e39a-438a-b48e-4292a44bcbf9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.447002753Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=07a56214-1d3b-46ae-9cab-a547f3d97c7a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.447264549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.463299604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.465964694Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b839e0c3255dcb75c8c02ad5d11329c0e3fdf48ac24d2435c86966b58ea48f89/merged/etc/passwd: no such file or directory"
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.466231069Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b839e0c3255dcb75c8c02ad5d11329c0e3fdf48ac24d2435c86966b58ea48f89/merged/etc/group: no such file or directory"
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.469226099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.498809615Z" level=info msg="Created container 1382fa66d6c4e982dc6b4a55d8edbbf187a34c265672f91b49f816a794745593: kube-system/storage-provisioner/storage-provisioner" id=07a56214-1d3b-46ae-9cab-a547f3d97c7a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.503666625Z" level=info msg="Starting container: 1382fa66d6c4e982dc6b4a55d8edbbf187a34c265672f91b49f816a794745593" id=7db78333-922c-46f0-9dfc-8aea29ddd4d1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 04:20:23 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:23.510065503Z" level=info msg="Started container" PID=1684 containerID=1382fa66d6c4e982dc6b4a55d8edbbf187a34c265672f91b49f816a794745593 description=kube-system/storage-provisioner/storage-provisioner id=7db78333-922c-46f0-9dfc-8aea29ddd4d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2aec815fd912f6fb0f5f5c102ce3b4d6e6e7ad80053e12d9286f5454291f238d
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.845309143Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.850734307Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.850769475Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.850797618Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.85533167Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.855367872Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.855391347Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.859768335Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.859806046Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.859832705Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.864256897Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.864290259Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.86431828Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.868470042Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 04:20:32 default-k8s-diff-port-303179 crio[666]: time="2025-11-24T04:20:32.868506777Z" level=info msg="Updated default CNI network name to kindnet"
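
	The CREATE/WRITE/RENAME sequence above is kindnet replacing its conflist atomically (write to a .temp file, then rename); CRI-O watches /etc/cni/net.d and re-resolves the default network on every event, which is why "Updated default CNI network name to kindnet" repeats. The resulting config can be confirmed on the node (path taken from the log; a sketch):

	minikube -p default-k8s-diff-port-303179 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist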
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	1382fa66d6c4e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   2aec815fd912f       storage-provisioner                                    kube-system
	71c2ac2a77db9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   aaedd6482295c       dashboard-metrics-scraper-6ffb444bf9-pjsgd             kubernetes-dashboard
	e6e6068e06b19       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   d8e8760eab9df       kubernetes-dashboard-855c9754f9-kxt5z                  kubernetes-dashboard
	94e4284acaaa6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   17f79d28b9d8c       coredns-66bc5c9577-jtn7v                               kube-system
	02dae000866f7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   bffa5e056a655       busybox                                                default
	8ea36127f1e2a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   0d11be28b3cd4       kube-proxy-dxbvb                                       kube-system
	77638956b16e6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   7cf33f37c1ea0       kindnet-wpp6p                                          kube-system
	691c670515b36       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   2aec815fd912f       storage-provisioner                                    kube-system
	e9dbfcfcc198e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   61108daff7cbf       kube-controller-manager-default-k8s-diff-port-303179   kube-system
	99ae33342ea98       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   500b20d5f5672       etcd-default-k8s-diff-port-303179                      kube-system
	5d9db75c10b00       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   ea00382c9be21       kube-scheduler-default-k8s-diff-port-303179            kube-system
	7b5099e4fd3c1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c1f8f86e883fb       kube-apiserver-default-k8s-diff-port-303179            kube-system
	
	
	==> coredns [94e4284acaaa64a390d75968180bd4df33aa7cd9f0ad954d942f3d86db1a8dc9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59472 - 33126 "HINFO IN 1902876730966256139.252883644883131976. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.039823838s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
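
	CoreDNS held off serving until its kubernetes plugin could sync against the API; the dial timeouts to 10.96.0.1:443 coincide with the control-plane restart, after which the watches recovered. Readiness can be probed through the ready plugin; a sketch assuming its default port 8181 and the stock coredns Deployment name:

	kubectl --context default-k8s-diff-port-303179 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context default-k8s-diff-port-303179 -n kube-system port-forward deploy/coredns 8181:8181 &
	sleep 2 && curl -s http://127.0.0.1:8181/ready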
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-303179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-303179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=default-k8s-diff-port-303179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T04_18_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 04:18:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-303179
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 04:20:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 04:20:22 +0000   Mon, 24 Nov 2025 04:18:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 04:20:22 +0000   Mon, 24 Nov 2025 04:18:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 04:20:22 +0000   Mon, 24 Nov 2025 04:18:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 04:20:22 +0000   Mon, 24 Nov 2025 04:19:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-303179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                0604e81b-b009-43d1-b54f-04b6a69cede9
	  Boot ID:                    e6ca431c-3a35-478f-87f6-f49cc4bc8a65
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-jtn7v                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 etcd-default-k8s-diff-port-303179                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-wpp6p                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-303179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-303179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-dxbvb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-303179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pjsgd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kxt5z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m21s              kube-proxy       
	  Normal   Starting                 56s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m29s              kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m29s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m29s              kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m29s              kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m29s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m25s              node-controller  Node default-k8s-diff-port-303179 event: Registered Node default-k8s-diff-port-303179 in Controller
	  Normal   NodeReady                102s               kubelet          Node default-k8s-diff-port-303179 status is now: NodeReady
	  Normal   Starting                 67s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)  kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)  kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)  kubelet          Node default-k8s-diff-port-303179 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-303179 event: Registered Node default-k8s-diff-port-303179 in Controller
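
	The duplicated Starting/CgroupV1 events reflect the kubelet restart about a minute before this dump; the Ready condition's LastTransitionTime (04:19:08) predates it, so readiness held across the restart. The conditions can be pulled directly, for example:

	kubectl --context default-k8s-diff-port-303179 get node default-k8s-diff-port-303179 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status} {end}'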
	
	
	==> dmesg <==
	[Nov24 03:58] overlayfs: idmapped layers are currently not supported
	[ +17.825126] overlayfs: idmapped layers are currently not supported
	[Nov24 03:59] overlayfs: idmapped layers are currently not supported
	[ +28.164324] overlayfs: idmapped layers are currently not supported
	[Nov24 04:01] overlayfs: idmapped layers are currently not supported
	[Nov24 04:02] overlayfs: idmapped layers are currently not supported
	[Nov24 04:04] overlayfs: idmapped layers are currently not supported
	[Nov24 04:05] overlayfs: idmapped layers are currently not supported
	[Nov24 04:06] overlayfs: idmapped layers are currently not supported
	[Nov24 04:08] overlayfs: idmapped layers are currently not supported
	[Nov24 04:10] overlayfs: idmapped layers are currently not supported
	[Nov24 04:11] overlayfs: idmapped layers are currently not supported
	[ +23.918932] overlayfs: idmapped layers are currently not supported
	[Nov24 04:12] overlayfs: idmapped layers are currently not supported
	[ +35.202347] overlayfs: idmapped layers are currently not supported
	[Nov24 04:13] overlayfs: idmapped layers are currently not supported
	[Nov24 04:15] overlayfs: idmapped layers are currently not supported
	[ +47.476343] overlayfs: idmapped layers are currently not supported
	[Nov24 04:16] overlayfs: idmapped layers are currently not supported
	[Nov24 04:17] overlayfs: idmapped layers are currently not supported
	[Nov24 04:18] overlayfs: idmapped layers are currently not supported
	[ +43.060353] overlayfs: idmapped layers are currently not supported
	[Nov24 04:19] overlayfs: idmapped layers are currently not supported
	[ +19.472739] overlayfs: idmapped layers are currently not supported
	[Nov24 04:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [99ae33342ea980542a5d01e94e4e877f3a9a7f61e7804bf44fc417104b2c8f75] <==
	{"level":"warn","ts":"2025-11-24T04:19:50.002267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.017629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.040916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.067384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.077608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.088852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.106560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.122294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.137717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.161012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.183974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.197523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.219208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.236051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.255613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.272740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.285437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.301403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.316090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.331510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.350790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.373269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.390994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.404788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T04:19:50.474529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37240","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 04:20:50 up  3:02,  0 user,  load average: 4.27, 3.74, 3.08
	Linux default-k8s-diff-port-303179 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [77638956b16e6865b98ae01afe0403153860b4c41ae3b6f7f1ca46f8dbd2a939] <==
	I1124 04:19:52.640848       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 04:19:52.718874       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 04:19:52.719061       1 main.go:148] setting mtu 1500 for CNI 
	I1124 04:19:52.719101       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 04:19:52.719140       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T04:19:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 04:19:52.845628       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 04:19:52.845729       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 04:19:52.845764       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 04:19:52.846217       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 04:20:22.844346       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 04:20:22.845669       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 04:20:22.846976       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 04:20:22.915353       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 04:20:24.146745       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 04:20:24.146781       1 metrics.go:72] Registering metrics
	I1124 04:20:24.146848       1 controller.go:711] "Syncing nftables rules"
	I1124 04:20:32.844992       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:20:32.845040       1 main.go:301] handling current node
	I1124 04:20:42.850811       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 04:20:42.850845       1 main.go:301] handling current node
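
	kindnet's list/watch failures against 10.96.0.1:443 span the same apiserver restart; once its caches synced at 04:20:24 it resumed the per-node handling loop. 10.96.0.1 is the kubernetes Service VIP, which can be checked with:

	kubectl --context default-k8s-diff-port-303179 get svc kubernetes -o jsonpath='{.spec.clusterIP}'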
	
	
	==> kube-apiserver [7b5099e4fd3c18fc391ec751d92268e0f783642d7729eae47a7899934d2bf05a] <==
	I1124 04:19:51.694822       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 04:19:51.694941       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 04:19:51.704751       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 04:19:51.706787       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 04:19:51.706971       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 04:19:51.707005       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 04:19:51.707071       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 04:19:51.707718       1 aggregator.go:171] initial CRD sync complete...
	I1124 04:19:51.712603       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 04:19:51.712679       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 04:19:51.712710       1 cache.go:39] Caches are synced for autoregister controller
	I1124 04:19:51.713698       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 04:19:51.754181       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1124 04:19:51.784232       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 04:19:52.096824       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 04:19:52.219085       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 04:19:52.550233       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 04:19:52.718083       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 04:19:52.844846       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 04:19:52.866250       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 04:19:53.008145       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.114.160"}
	I1124 04:19:53.029558       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.184.179"}
	I1124 04:19:55.389749       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 04:19:55.439424       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 04:19:55.493967       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e9dbfcfcc198e21983dc97bf121184dd3db9248de5fd970ee04f8ed5f32a25ed] <==
	I1124 04:19:54.970890       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 04:19:54.981178       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 04:19:54.982435       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 04:19:54.982578       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 04:19:54.982604       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 04:19:54.982893       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 04:19:54.983054       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 04:19:54.983078       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 04:19:54.983182       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:19:54.983219       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 04:19:54.983248       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 04:19:54.983453       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 04:19:54.989373       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 04:19:54.996806       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 04:19:54.997019       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 04:19:55.012960       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 04:19:55.032685       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 04:19:55.032913       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 04:19:55.033029       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-303179"
	I1124 04:19:55.033102       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 04:19:55.032805       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 04:19:55.034562       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 04:19:55.037182       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 04:19:55.038588       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 04:19:55.045766       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [8ea36127f1e2a736a6fa13fdcd1a92bbbae15e2705dd51f2675d3440113d8abb] <==
	I1124 04:19:52.807285       1 server_linux.go:53] "Using iptables proxy"
	I1124 04:19:52.948943       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 04:19:53.050073       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 04:19:53.064936       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 04:19:53.065035       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 04:19:53.203615       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 04:19:53.203749       1 server_linux.go:132] "Using iptables Proxier"
	I1124 04:19:53.219337       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 04:19:53.219732       1 server.go:527] "Version info" version="v1.34.1"
	I1124 04:19:53.219927       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:19:53.221187       1 config.go:200] "Starting service config controller"
	I1124 04:19:53.221247       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 04:19:53.221286       1 config.go:106] "Starting endpoint slice config controller"
	I1124 04:19:53.221312       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 04:19:53.221348       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 04:19:53.221375       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 04:19:53.222681       1 config.go:309] "Starting node config controller"
	I1124 04:19:53.223335       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 04:19:53.223393       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 04:19:53.324149       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 04:19:53.324186       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 04:19:53.324227       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
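
	The configuration warning is advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP. Below is the log's own suggestion in flag form, plus a sketch for inspecting the KUBE-NODEPORTS chain kube-proxy programs in iptables mode:

	kube-proxy --nodeport-addresses primary
	minikube -p default-k8s-diff-port-303179 ssh -- sudo iptables -t nat -L KUBE-NODEPORTS -n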
	
	
	==> kube-scheduler [5d9db75c10b0014f3fe772d0746170c1ac112901a7f81fceee9ad108d08be4d4] <==
	I1124 04:19:48.785375       1 serving.go:386] Generated self-signed cert in-memory
	W1124 04:19:51.263490       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 04:19:51.263618       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 04:19:51.263654       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 04:19:51.263702       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 04:19:51.557059       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 04:19:51.570827       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 04:19:51.589123       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 04:19:51.589372       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:19:51.589397       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 04:19:51.589561       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 04:19:51.789807       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 04:19:55 default-k8s-diff-port-303179 kubelet[795]: I1124 04:19:55.804609     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwlh4\" (UniqueName: \"kubernetes.io/projected/f27d2dc7-02aa-4c7f-ad0d-2780a4cbead8-kube-api-access-bwlh4\") pod \"kubernetes-dashboard-855c9754f9-kxt5z\" (UID: \"f27d2dc7-02aa-4c7f-ad0d-2780a4cbead8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kxt5z"
	Nov 24 04:19:55 default-k8s-diff-port-303179 kubelet[795]: W1124 04:19:55.968721     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/crio-aaedd6482295c07e2f424254ad1aba39e814cf703ff571493c23bbe504f0cccc WatchSource:0}: Error finding container aaedd6482295c07e2f424254ad1aba39e814cf703ff571493c23bbe504f0cccc: Status 404 returned error can't find the container with id aaedd6482295c07e2f424254ad1aba39e814cf703ff571493c23bbe504f0cccc
	Nov 24 04:19:56 default-k8s-diff-port-303179 kubelet[795]: W1124 04:19:56.002300     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c6af048d3f8eed722eea7223c0aae47b7c545b8922e6d139f9580c8eda525748/crio-d8e8760eab9dff2ed38768db4d1595e78b3b65123e0aeae4cd4ed0354afa3376 WatchSource:0}: Error finding container d8e8760eab9dff2ed38768db4d1595e78b3b65123e0aeae4cd4ed0354afa3376: Status 404 returned error can't find the container with id d8e8760eab9dff2ed38768db4d1595e78b3b65123e0aeae4cd4ed0354afa3376
	Nov 24 04:20:00 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:00.004372     795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 04:20:02 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:02.369792     795 scope.go:117] "RemoveContainer" containerID="5c338d457c4d322764fcae234383de9faa199487d56e0765c922c16fcbbc7240"
	Nov 24 04:20:03 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:03.375754     795 scope.go:117] "RemoveContainer" containerID="5c338d457c4d322764fcae234383de9faa199487d56e0765c922c16fcbbc7240"
	Nov 24 04:20:03 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:03.376050     795 scope.go:117] "RemoveContainer" containerID="9010f62448852361cf7013ab7f56db36f153af69befb8b8430e0af6aea19cdee"
	Nov 24 04:20:03 default-k8s-diff-port-303179 kubelet[795]: E1124 04:20:03.376201     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pjsgd_kubernetes-dashboard(4e184ed9-95b6-40f3-a516-b9ab36a8e5f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pjsgd" podUID="4e184ed9-95b6-40f3-a516-b9ab36a8e5f5"
	Nov 24 04:20:04 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:04.381088     795 scope.go:117] "RemoveContainer" containerID="9010f62448852361cf7013ab7f56db36f153af69befb8b8430e0af6aea19cdee"
	Nov 24 04:20:04 default-k8s-diff-port-303179 kubelet[795]: E1124 04:20:04.381862     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pjsgd_kubernetes-dashboard(4e184ed9-95b6-40f3-a516-b9ab36a8e5f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pjsgd" podUID="4e184ed9-95b6-40f3-a516-b9ab36a8e5f5"
	Nov 24 04:20:05 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:05.925503     795 scope.go:117] "RemoveContainer" containerID="9010f62448852361cf7013ab7f56db36f153af69befb8b8430e0af6aea19cdee"
	Nov 24 04:20:05 default-k8s-diff-port-303179 kubelet[795]: E1124 04:20:05.925705     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pjsgd_kubernetes-dashboard(4e184ed9-95b6-40f3-a516-b9ab36a8e5f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pjsgd" podUID="4e184ed9-95b6-40f3-a516-b9ab36a8e5f5"
	Nov 24 04:20:18 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:18.154879     795 scope.go:117] "RemoveContainer" containerID="9010f62448852361cf7013ab7f56db36f153af69befb8b8430e0af6aea19cdee"
	Nov 24 04:20:18 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:18.426077     795 scope.go:117] "RemoveContainer" containerID="9010f62448852361cf7013ab7f56db36f153af69befb8b8430e0af6aea19cdee"
	Nov 24 04:20:18 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:18.426636     795 scope.go:117] "RemoveContainer" containerID="71c2ac2a77db96c635e8a7e09623f36fea99c4831f268be3cc0d4dd5cdcaa5d4"
	Nov 24 04:20:18 default-k8s-diff-port-303179 kubelet[795]: E1124 04:20:18.427218     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pjsgd_kubernetes-dashboard(4e184ed9-95b6-40f3-a516-b9ab36a8e5f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pjsgd" podUID="4e184ed9-95b6-40f3-a516-b9ab36a8e5f5"
	Nov 24 04:20:18 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:18.472046     795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kxt5z" podStartSLOduration=10.658353632 podStartE2EDuration="23.472024344s" podCreationTimestamp="2025-11-24 04:19:55 +0000 UTC" firstStartedPulling="2025-11-24 04:19:56.010347835 +0000 UTC m=+12.214416731" lastFinishedPulling="2025-11-24 04:20:08.824018547 +0000 UTC m=+25.028087443" observedRunningTime="2025-11-24 04:20:09.432057745 +0000 UTC m=+25.636126641" watchObservedRunningTime="2025-11-24 04:20:18.472024344 +0000 UTC m=+34.676093248"
	Nov 24 04:20:23 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:23.441449     795 scope.go:117] "RemoveContainer" containerID="691c670515b3649d1af8f8fc34e9afc633ea3c1168b0515b53808ecd55c01c47"
	Nov 24 04:20:25 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:25.926007     795 scope.go:117] "RemoveContainer" containerID="71c2ac2a77db96c635e8a7e09623f36fea99c4831f268be3cc0d4dd5cdcaa5d4"
	Nov 24 04:20:25 default-k8s-diff-port-303179 kubelet[795]: E1124 04:20:25.926234     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pjsgd_kubernetes-dashboard(4e184ed9-95b6-40f3-a516-b9ab36a8e5f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pjsgd" podUID="4e184ed9-95b6-40f3-a516-b9ab36a8e5f5"
	Nov 24 04:20:38 default-k8s-diff-port-303179 kubelet[795]: I1124 04:20:38.155192     795 scope.go:117] "RemoveContainer" containerID="71c2ac2a77db96c635e8a7e09623f36fea99c4831f268be3cc0d4dd5cdcaa5d4"
	Nov 24 04:20:38 default-k8s-diff-port-303179 kubelet[795]: E1124 04:20:38.155387     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pjsgd_kubernetes-dashboard(4e184ed9-95b6-40f3-a516-b9ab36a8e5f5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pjsgd" podUID="4e184ed9-95b6-40f3-a516-b9ab36a8e5f5"
	Nov 24 04:20:45 default-k8s-diff-port-303179 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 04:20:45 default-k8s-diff-port-303179 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 04:20:45 default-k8s-diff-port-303179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e6e6068e06b191282d1459f1f15294aab079f0847575d0a311c401d7cff667c8] <==
	2025/11/24 04:20:08 Using namespace: kubernetes-dashboard
	2025/11/24 04:20:08 Using in-cluster config to connect to apiserver
	2025/11/24 04:20:08 Using secret token for csrf signing
	2025/11/24 04:20:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 04:20:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 04:20:08 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 04:20:08 Generating JWE encryption key
	2025/11/24 04:20:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 04:20:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 04:20:09 Initializing JWE encryption key from synchronized object
	2025/11/24 04:20:09 Creating in-cluster Sidecar client
	2025/11/24 04:20:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 04:20:09 Serving insecurely on HTTP port: 9090
	2025/11/24 04:20:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 04:20:08 Starting overwatch
	
	
	==> storage-provisioner [1382fa66d6c4e982dc6b4a55d8edbbf187a34c265672f91b49f816a794745593] <==
	I1124 04:20:23.540437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 04:20:23.557226       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 04:20:23.560199       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 04:20:23.564892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:27.020777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:31.280819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:34.879888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:37.933301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:40.956281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:40.964588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:20:40.964783       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 04:20:40.964990       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-303179_89b97a19-a91c-429c-9576-765a8ccc9830!
	I1124 04:20:40.965790       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"09f66fd8-db14-4a17-8771-4d111bed13aa", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-303179_89b97a19-a91c-429c-9576-765a8ccc9830 became leader
	W1124 04:20:40.969590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:40.975186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 04:20:41.065932       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-303179_89b97a19-a91c-429c-9576-765a8ccc9830!
	W1124 04:20:42.978608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:42.989015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:44.992796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:44.998959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:47.002662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:47.009667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:49.012903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 04:20:49.018139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [691c670515b3649d1af8f8fc34e9afc633ea3c1168b0515b53808ecd55c01c47] <==
	I1124 04:19:52.653994       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 04:20:22.657101       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179: exit status 2 (386.631628ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-303179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.52s)
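To triage this failure by hand, the two post-mortem checks from helpers_test.go above can be rerun directly against the profile named in the logs. The final kubectl logs call is an added suggestion (hypothetical, not something the suite runs) for the dashboard-metrics-scraper pod the kubelet reports in CrashLoopBackOff:

	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179
	kubectl --context default-k8s-diff-port-303179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# hypothetical extra step: previous-container logs of the crash-looping scraper
	kubectl --context default-k8s-diff-port-303179 -n kubernetes-dashboard logs pod/dashboard-metrics-scraper-6ffb444bf9-pjsgd --previous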
E1124 04:26:27.751259  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:26:38.019417  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
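Both cert_rotation errors above come from kubeconfig entries whose client certificates belong to profiles that were already deleted. A minimal cleanup sketch, assuming the stale contexts still exist and carry the profile names (minikube's default context naming):

	kubectl config delete-context auto-778509
	kubectl config delete-context no-preload-600301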


Test pass (261/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 35.57
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 25.75
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 159.55
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.78
48 TestAddons/StoppedEnableDisable 12.44
49 TestCertOptions 38.4
50 TestCertExpiration 248.7
52 TestForceSystemdFlag 48.15
53 TestForceSystemdEnv 45.28
58 TestErrorSpam/setup 33.13
59 TestErrorSpam/start 0.87
60 TestErrorSpam/status 1.09
61 TestErrorSpam/pause 5.31
62 TestErrorSpam/unpause 5.39
63 TestErrorSpam/stop 1.51
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 79.18
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 25.63
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.56
75 TestFunctional/serial/CacheCmd/cache/add_local 1.2
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.89
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 31.18
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.46
86 TestFunctional/serial/LogsFileCmd 1.48
87 TestFunctional/serial/InvalidService 4.14
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 6.87
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.06
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 23.34
101 TestFunctional/parallel/SSHCmd 0.55
102 TestFunctional/parallel/CpCmd 2.04
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 2.31
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
113 TestFunctional/parallel/License 0.4
114 TestFunctional/parallel/Version/short 0.08
115 TestFunctional/parallel/Version/components 0.87
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.03
121 TestFunctional/parallel/ImageCommands/Setup 0.63
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.32
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ServiceCmd/List 0.54
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
150 TestFunctional/parallel/ProfileCmd/profile_list 0.42
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
152 TestFunctional/parallel/MountCmd/any-port 6.67
153 TestFunctional/parallel/MountCmd/specific-port 1.89
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 211.9
163 TestMultiControlPlane/serial/DeployApp 37.48
164 TestMultiControlPlane/serial/PingHostFromPods 1.49
165 TestMultiControlPlane/serial/AddWorkerNode 57.82
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 20.14
169 TestMultiControlPlane/serial/StopSecondaryNode 12.81
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
171 TestMultiControlPlane/serial/RestartSecondaryNode 32.52
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.42
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 118.32
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.84
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
176 TestMultiControlPlane/serial/StopCluster 25.49
177 TestMultiControlPlane/serial/RestartCluster 95.66
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
179 TestMultiControlPlane/serial/AddSecondaryNode 55.33
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
185 TestJSONOutput/start/Command 80.08
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.83
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 56.81
211 TestKicCustomNetwork/use_default_bridge_network 37.46
212 TestKicExistingNetwork 37.14
213 TestKicCustomSubnet 36.01
214 TestKicStaticIP 38.63
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 72.88
219 TestMountStart/serial/StartWithMountFirst 8.75
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.94
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.29
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 8.3
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 139.66
231 TestMultiNode/serial/DeployApp2Nodes 4.84
232 TestMultiNode/serial/PingHostFrom2Pods 0.93
233 TestMultiNode/serial/AddNode 58.29
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.73
236 TestMultiNode/serial/CopyFile 10.91
237 TestMultiNode/serial/StopNode 2.46
238 TestMultiNode/serial/StartAfterStop 8.5
239 TestMultiNode/serial/RestartKeepsNodes 71.95
240 TestMultiNode/serial/DeleteNode 5.68
241 TestMultiNode/serial/StopMultiNode 24.12
242 TestMultiNode/serial/RestartMultiNode 58.1
243 TestMultiNode/serial/ValidateNameConflict 37.47
248 TestPreload 127.04
250 TestScheduledStopUnix 109.9
253 TestInsufficientStorage 13.36
254 TestRunningBinaryUpgrade 54.34
256 TestKubernetesUpgrade 366.73
257 TestMissingContainerUpgrade 127.81
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 46.63
261 TestNoKubernetes/serial/StartWithStopK8s 15.36
262 TestNoKubernetes/serial/Start 9.26
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0.01
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
265 TestNoKubernetes/serial/ProfileList 0.7
266 TestNoKubernetes/serial/Stop 1.3
267 TestNoKubernetes/serial/StartNoArgs 7.09
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
269 TestStoppedBinaryUpgrade/Setup 0.8
270 TestStoppedBinaryUpgrade/Upgrade 57.98
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
280 TestPause/serial/Start 82.5
281 TestPause/serial/SecondStartNoReconfiguration 29.16
290 TestNetworkPlugins/group/false 4.01
295 TestStartStop/group/old-k8s-version/serial/FirstStart 61.25
296 TestStartStop/group/old-k8s-version/serial/DeployApp 10.43
298 TestStartStop/group/old-k8s-version/serial/Stop 12.06
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
300 TestStartStop/group/old-k8s-version/serial/SecondStart 50.42
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
306 TestStartStop/group/no-preload/serial/FirstStart 75.88
308 TestStartStop/group/embed-certs/serial/FirstStart 86.24
309 TestStartStop/group/no-preload/serial/DeployApp 10.32
311 TestStartStop/group/no-preload/serial/Stop 12.04
312 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
313 TestStartStop/group/no-preload/serial/SecondStart 53.64
314 TestStartStop/group/embed-certs/serial/DeployApp 9.5
316 TestStartStop/group/embed-certs/serial/Stop 12.28
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
318 TestStartStop/group/embed-certs/serial/SecondStart 54.87
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.18
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.34
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.21
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
330 TestStartStop/group/newest-cni/serial/FirstStart 39.09
331 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.4
332 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/Stop 1.34
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
336 TestStartStop/group/newest-cni/serial/SecondStart 15.76
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.3
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.48
345 TestNetworkPlugins/group/auto/Start 84.91
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.13
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
350 TestNetworkPlugins/group/kindnet/Start 80.8
351 TestNetworkPlugins/group/auto/KubeletFlags 0.37
352 TestNetworkPlugins/group/auto/NetCatPod 12.36
353 TestNetworkPlugins/group/auto/DNS 0.26
354 TestNetworkPlugins/group/auto/Localhost 0.19
355 TestNetworkPlugins/group/auto/HairPin 0.21
356 TestNetworkPlugins/group/calico/Start 59.66
357 TestNetworkPlugins/group/kindnet/ControllerPod 6
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
359 TestNetworkPlugins/group/kindnet/NetCatPod 14.35
360 TestNetworkPlugins/group/kindnet/DNS 0.33
361 TestNetworkPlugins/group/kindnet/Localhost 0.18
362 TestNetworkPlugins/group/kindnet/HairPin 0.27
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.43
365 TestNetworkPlugins/group/calico/NetCatPod 12.43
366 TestNetworkPlugins/group/custom-flannel/Start 65.71
367 TestNetworkPlugins/group/calico/DNS 0.23
368 TestNetworkPlugins/group/calico/Localhost 0.2
369 TestNetworkPlugins/group/calico/HairPin 0.18
370 TestNetworkPlugins/group/enable-default-cni/Start 80.24
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.37
373 TestNetworkPlugins/group/custom-flannel/DNS 0.15
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
376 TestNetworkPlugins/group/flannel/Start 64.97
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.38
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
382 TestNetworkPlugins/group/bridge/Start 74.87
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
385 TestNetworkPlugins/group/flannel/NetCatPod 12.33
386 TestNetworkPlugins/group/flannel/DNS 0.17
387 TestNetworkPlugins/group/flannel/Localhost 0.13
388 TestNetworkPlugins/group/flannel/HairPin 0.14
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
390 TestNetworkPlugins/group/bridge/NetCatPod 9.26
391 TestNetworkPlugins/group/bridge/DNS 0.15
392 TestNetworkPlugins/group/bridge/Localhost 0.13
393 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (35.57s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-738458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-738458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (35.57230513s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (35.57s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 03:13:01.314712  291389 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1124 03:13:01.314791  291389 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-738458
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-738458: exit status 85 (90.349808ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-738458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-738458 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:12:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:12:25.787174  291395 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:12:25.787325  291395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:25.787339  291395 out.go:374] Setting ErrFile to fd 2...
	I1124 03:12:25.787361  291395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:25.787751  291395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	W1124 03:12:25.787964  291395 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21975-289526/.minikube/config/config.json: open /home/jenkins/minikube-integration/21975-289526/.minikube/config/config.json: no such file or directory
	I1124 03:12:25.788449  291395 out.go:368] Setting JSON to true
	I1124 03:12:25.789339  291395 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6875,"bootTime":1763947071,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 03:12:25.789438  291395 start.go:143] virtualization:  
	I1124 03:12:25.795012  291395 out.go:99] [download-only-738458] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1124 03:12:25.795366  291395 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 03:12:25.795430  291395 notify.go:221] Checking for updates...
	I1124 03:12:25.798798  291395 out.go:171] MINIKUBE_LOCATION=21975
	I1124 03:12:25.802577  291395 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:12:25.805996  291395 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 03:12:25.809294  291395 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 03:12:25.812587  291395 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1124 03:12:25.818749  291395 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 03:12:25.819050  291395 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:12:25.844559  291395 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:12:25.844703  291395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:25.900086  291395 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-24 03:12:25.890802976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:12:25.900194  291395 docker.go:319] overlay module found
	I1124 03:12:25.903371  291395 out.go:99] Using the docker driver based on user configuration
	I1124 03:12:25.903421  291395 start.go:309] selected driver: docker
	I1124 03:12:25.903429  291395 start.go:927] validating driver "docker" against <nil>
	I1124 03:12:25.903548  291395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:25.959658  291395 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-24 03:12:25.950142281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:12:25.959826  291395 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:12:25.960116  291395 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1124 03:12:25.960276  291395 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 03:12:25.963595  291395 out.go:171] Using Docker driver with root privileges
	I1124 03:12:25.966871  291395 cni.go:84] Creating CNI manager for ""
	I1124 03:12:25.966968  291395 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:25.966981  291395 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:12:25.967075  291395 start.go:353] cluster config:
	{Name:download-only-738458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-738458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:25.970267  291395 out.go:99] Starting "download-only-738458" primary control-plane node in "download-only-738458" cluster
	I1124 03:12:25.970299  291395 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:12:25.973349  291395 out.go:99] Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:12:25.973427  291395 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 03:12:25.973484  291395 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:12:25.989501  291395 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 to local cache
	I1124 03:12:25.989692  291395 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory
	I1124 03:12:25.989781  291395 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 to local cache
	I1124 03:12:26.026753  291395 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1124 03:12:26.026783  291395 cache.go:65] Caching tarball of preloaded images
	I1124 03:12:26.026993  291395 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 03:12:26.030379  291395 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1124 03:12:26.030418  291395 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1124 03:12:26.128231  291395 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1124 03:12:26.128371  291395 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1124 03:12:34.108636  291395 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 as a tarball
	I1124 03:13:00.328478  291395 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1124 03:13:00.328916  291395 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/download-only-738458/config.json ...
	I1124 03:13:00.328956  291395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/download-only-738458/config.json: {Name:mk8c18b5b4bb5f10ef12e975748956b9dce03530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:00.329199  291395 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 03:13:00.329446  291395 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21975-289526/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-738458 host does not exist
	  To start a cluster, run: "minikube start -p download-only-738458"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
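The Last Start log above also records how the preload download is integrity-checked: an md5 is fetched from the GCS API and passed as a ?checksum= query parameter. A manual equivalent, reusing the exact URL and checksum values from the log (a sketch; run from any scratch directory):

	curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	# verify against the checksum the downloader obtained from the GCS API
	echo "e092595ade89dbfc477bd4cd6b9c633b  preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4" | md5sum -c -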

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-738458
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (25.75s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-946785 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-946785 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (25.754595868s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (25.75s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1124 03:13:27.524905  291389 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1124 03:13:27.524941  291389 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)
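A by-hand equivalent of this check is simply stat-ing the cached tarball at the path the test logs (a sketch):

	ls -lh /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4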

TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-946785
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-946785: exit status 85 (94.139381ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-738458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-738458 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ delete  │ -p download-only-738458                                                                                                                                                   │ download-only-738458 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -o=json --download-only -p download-only-946785 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-946785 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:13:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:13:01.814272  291592 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:13:01.815038  291592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:01.815080  291592 out.go:374] Setting ErrFile to fd 2...
	I1124 03:13:01.815101  291592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:01.815437  291592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:13:01.815904  291592 out.go:368] Setting JSON to true
	I1124 03:13:01.816814  291592 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6911,"bootTime":1763947071,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 03:13:01.816907  291592 start.go:143] virtualization:  
	I1124 03:13:01.820341  291592 out.go:99] [download-only-946785] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:13:01.820538  291592 notify.go:221] Checking for updates...
	I1124 03:13:01.823427  291592 out.go:171] MINIKUBE_LOCATION=21975
	I1124 03:13:01.826499  291592 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:13:01.829478  291592 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 03:13:01.832324  291592 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 03:13:01.835207  291592 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1124 03:13:01.841141  291592 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 03:13:01.841417  291592 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:13:01.873220  291592 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:13:01.873346  291592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:01.939399  291592 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-24 03:13:01.93017146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:13:01.939506  291592 docker.go:319] overlay module found
	I1124 03:13:01.942399  291592 out.go:99] Using the docker driver based on user configuration
	I1124 03:13:01.942431  291592 start.go:309] selected driver: docker
	I1124 03:13:01.942437  291592 start.go:927] validating driver "docker" against <nil>
	I1124 03:13:01.942568  291592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:01.996867  291592 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-24 03:13:01.987297436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:13:01.997039  291592 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:13:01.997360  291592 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1124 03:13:01.997522  291592 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 03:13:02.011322  291592 out.go:171] Using Docker driver with root privileges
	I1124 03:13:02.014318  291592 cni.go:84] Creating CNI manager for ""
	I1124 03:13:02.014407  291592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:13:02.014421  291592 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:13:02.014634  291592 start.go:353] cluster config:
	{Name:download-only-946785 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-946785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:02.017784  291592 out.go:99] Starting "download-only-946785" primary control-plane node in "download-only-946785" cluster
	I1124 03:13:02.017825  291592 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:13:02.020720  291592 out.go:99] Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:13:02.020788  291592 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:13:02.020841  291592 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:13:02.036954  291592 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 to local cache
	I1124 03:13:02.037249  291592 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory
	I1124 03:13:02.037275  291592 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory, skipping pull
	I1124 03:13:02.037316  291592 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in cache, skipping pull
	I1124 03:13:02.037329  291592 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 as a tarball
	I1124 03:13:02.097242  291592 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 03:13:02.097277  291592 cache.go:65] Caching tarball of preloaded images
	I1124 03:13:02.097466  291592 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:13:02.100564  291592 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1124 03:13:02.100599  291592 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1124 03:13:02.183278  291592 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1124 03:13:02.183337  291592 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21975-289526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 03:13:26.488182  291592 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:13:26.488620  291592 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/download-only-946785/config.json ...
	I1124 03:13:26.488669  291592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/download-only-946785/config.json: {Name:mka3ee450f74cb07e6b276b003c19f902666032f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:26.488867  291592 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:13:26.489028  291592 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21975-289526/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-946785 host does not exist
	  To start a cluster, run: "minikube start -p download-only-946785"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
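For reference, the preload fetch logged above boils down to an MD5-verified download; a rough manual equivalent, assuming only the GCS URL and checksum printed in the log:

    # sketch only: fetch the CRI-O preload tarball and verify the MD5 the GCS API returned
    curl -fLo preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
    echo "bc3e4aa50814345ef9ba3452bb5efb9f  preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4" | md5sum -c -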
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-946785
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.62s)
=== RUN   TestBinaryMirror
I1124 03:13:28.678754  291389 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
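The checksum=file: form above pairs the binary with its published SHA-256 digest; done by hand, assuming the same release URLs, it looks roughly like:

    # sketch only: download kubectl and verify it against the published .sha256 file
    curl -fLO "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"
    curl -fLO "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256"
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum -c -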
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-193578 --alsologtostderr --binary-mirror http://127.0.0.1:46679 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-193578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-193578
--- PASS: TestBinaryMirror (0.62s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-153780
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-153780: exit status 85 (85.969405ms)

-- stdout --
	* Profile "addons-153780" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-153780"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-153780
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-153780: exit status 85 (88.74336ms)

-- stdout --
	* Profile "addons-153780" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-153780"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (159.55s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-153780 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-153780 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m39.551736964s)
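With this many --addons flags it is worth confirming what actually came up; one way, reusing the profile name above (a sketch, not part of the test output):

    out/minikube-linux-arm64 -p addons-153780 addons list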
--- PASS: TestAddons/Setup (159.55s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-153780 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-153780 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (9.78s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-153780 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-153780 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3ebeb4a9-62ee-4b41-b435-c7a83e16dd93] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3ebeb4a9-62ee-4b41-b435-c7a83e16dd93] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003242909s
addons_test.go:694: (dbg) Run:  kubectl --context addons-153780 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-153780 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-153780 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-153780 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.78s)

TestAddons/StoppedEnableDisable (12.44s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-153780
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-153780: (12.153444314s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-153780
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-153780
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-153780
--- PASS: TestAddons/StoppedEnableDisable (12.44s)

TestCertOptions (38.4s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-967682 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-967682 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.57823917s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-967682 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
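The --apiserver-ips/--apiserver-names flags should surface as extra SANs in that certificate; a quick check, reusing the command above (sketch only):

    out/minikube-linux-arm64 -p cert-options-967682 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'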
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-967682 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-967682 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-967682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-967682
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-967682: (2.068172278s)
--- PASS: TestCertOptions (38.40s)

TestCertExpiration (248.7s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-918798 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1124 04:11:27.462994  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-918798 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.541593841s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-918798 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-918798 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (22.261327252s)
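To confirm the second start actually rotated the certificate, the new expiry can be read back; a sketch, assuming the apiserver cert sits at the same path TestCertOptions inspects:

    out/minikube-linux-arm64 -p cert-expiration-918798 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"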
helpers_test.go:175: Cleaning up "cert-expiration-918798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-918798
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-918798: (3.898030859s)
--- PASS: TestCertExpiration (248.70s)

TestForceSystemdFlag (48.15s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-499579 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-499579 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.443198414s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-499579 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
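What the test reads back here is CRI-O's drop-in config; under the assumption that --force-systemd flips CRI-O's cgroup manager, the relevant line can be pulled out directly (sketch only):

    out/minikube-linux-arm64 -p force-systemd-flag-499579 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
    # expected, if the assumption holds: cgroup_manager = "systemd"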
helpers_test.go:175: Cleaning up "force-systemd-flag-499579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-499579
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-499579: (3.149922285s)
--- PASS: TestForceSystemdFlag (48.15s)

TestForceSystemdEnv (45.28s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-400958 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1124 04:10:52.920998  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:11:09.842566  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-400958 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.541394275s)
helpers_test.go:175: Cleaning up "force-systemd-env-400958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-400958
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-400958: (2.740899256s)
--- PASS: TestForceSystemdEnv (45.28s)

TestErrorSpam/setup (33.13s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-565305 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-565305 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-565305 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-565305 --driver=docker  --container-runtime=crio: (33.126801342s)
--- PASS: TestErrorSpam/setup (33.13s)

TestErrorSpam/start (0.87s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 start --dry-run
--- PASS: TestErrorSpam/start (0.87s)

TestErrorSpam/status (1.09s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 status
--- PASS: TestErrorSpam/status (1.09s)

TestErrorSpam/pause (5.31s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 pause: exit status 80 (1.584610947s)

-- stdout --
	* Pausing node nospam-565305 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:20:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 pause" failed: exit status 80
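Every GUEST_PAUSE failure in this test traces back to the same runc error; a minimal way to reproduce it by hand, assuming the nospam-565305 node is still running (sketch only):

    out/minikube-linux-arm64 -p nospam-565305 ssh "sudo runc list -f json"   # fails the same way while /run/runc is absent
    out/minikube-linux-arm64 -p nospam-565305 ssh "ls /run/runc"             # confirms whether runc's state directory exists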
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 pause: exit status 80 (1.681378867s)

-- stdout --
	* Pausing node nospam-565305 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:20:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 pause: exit status 80 (2.045942802s)

-- stdout --
	* Pausing node nospam-565305 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:20:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.31s)

TestErrorSpam/unpause (5.39s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 unpause: exit status 80 (1.613948131s)

-- stdout --
	* Unpausing node nospam-565305 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:20:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 unpause: exit status 80 (1.572268305s)

-- stdout --
	* Unpausing node nospam-565305 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:20:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 unpause: exit status 80 (2.203011786s)

-- stdout --
	* Unpausing node nospam-565305 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:20:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.39s)

TestErrorSpam/stop (1.51s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 stop: (1.30962308s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-565305 --log_dir /tmp/nospam-565305 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21975-289526/.minikube/files/etc/test/nested/copy/291389/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.18s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-666975 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1124 03:21:09.847626  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:21:09.854031  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:21:09.865438  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:21:09.886901  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:21:09.928296  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:21:10.013529  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:21:10.175318  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:21:10.496702  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:21:11.138687  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:21:12.420012  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:21:14.982584  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:21:20.104800  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:21:30.346199  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:21:50.827705  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-666975 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.179531215s)
--- PASS: TestFunctional/serial/StartWithProxy (79.18s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (25.63s)
=== RUN   TestFunctional/serial/SoftStart
I1124 03:22:07.737676  291389 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-666975 --alsologtostderr -v=8
E1124 03:22:31.789167  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-666975 --alsologtostderr -v=8: (25.62948337s)
functional_test.go:678: soft start took 25.63001144s for "functional-666975" cluster.
I1124 03:22:33.367441  291389 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (25.63s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-666975 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-666975 cache add registry.k8s.io/pause:3.1: (1.211228403s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-666975 cache add registry.k8s.io/pause:3.3: (1.222600965s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-666975 cache add registry.k8s.io/pause:latest: (1.122521553s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

TestFunctional/serial/CacheCmd/cache/add_local (1.2s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-666975 /tmp/TestFunctionalserialCacheCmdcacheadd_local3834211237/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 cache add minikube-local-cache-test:functional-666975
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 cache delete minikube-local-cache-test:functional-666975
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-666975
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666975 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.803549ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh sudo crictl inspecti registry.k8s.io/pause:latest
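The rmi/reload cycle can also be confirmed by listing images inside the node, reusing the crictl access already exercised above (sketch only):

    out/minikube-linux-arm64 -p functional-666975 ssh "sudo crictl images | grep pause"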
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 kubectl -- --context functional-666975 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-666975 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (31.18s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-666975 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-666975 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.172134171s)
functional_test.go:776: restart took 31.172218742s for "functional-666975" cluster.
I1124 03:23:12.138728  291389 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
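One way to confirm the extra apiserver flag landed, assuming the usual kubeadm static-pod labels (a sketch, not part of the test output):

    kubectl --context functional-666975 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins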
--- PASS: TestFunctional/serial/ExtraConfig (31.18s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-666975 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.46s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-666975 logs: (1.463595741s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

TestFunctional/serial/LogsFileCmd (1.48s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 logs --file /tmp/TestFunctionalserialLogsFileCmd4171901477/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-666975 logs --file /tmp/TestFunctionalserialLogsFileCmd4171901477/001/logs.txt: (1.483212987s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

TestFunctional/serial/InvalidService (4.14s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-666975 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-666975
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-666975: exit status 115 (387.495319ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30114 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-666975 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.14s)
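
testdata/invalidsvc.yaml is not reproduced in this report; a minimal sketch of a Service that triggers the same SVC_UNREACHABLE exit (a NodePort whose selector matches no running pod), with a hypothetical selector label:

	kubectl --context functional-666975 apply -f - <<'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: does-not-exist   # hypothetical label; no pod carries it
	  ports:
	  - port: 80
	EOF
	out/minikube-linux-arm64 service invalid-svc -p functional-666975   # exits 115: SVC_UNREACHABLE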

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666975 config get cpus: exit status 14 (78.088392ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666975 config get cpus: exit status 14 (75.77936ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
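
Condensed, the round-trip this test drives (exit codes as observed above):

	out/minikube-linux-arm64 -p functional-666975 config set cpus 2
	out/minikube-linux-arm64 -p functional-666975 config get cpus     # prints 2, exit 0
	out/minikube-linux-arm64 -p functional-666975 config unset cpus
	out/minikube-linux-arm64 -p functional-666975 config get cpus     # exit 14: key not found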

TestFunctional/parallel/DashboardCmd (6.87s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-666975 --alsologtostderr -v=1]
2025/11/24 03:33:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-666975 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 319240: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.87s)
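
With --url the dashboard command prints the kubectl-proxy URL instead of opening a browser, which is what lets the test poll it; a sketch using the port from this run:

	out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-666975
	# prints http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/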

TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-666975 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-666975 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (195.575481ms)
-- stdout --
	* [functional-666975] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1124 03:33:42.724703  318946 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:33:42.724914  318946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:33:42.724941  318946 out.go:374] Setting ErrFile to fd 2...
	I1124 03:33:42.724961  318946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:33:42.725286  318946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:33:42.725789  318946 out.go:368] Setting JSON to false
	I1124 03:33:42.726767  318946 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8152,"bootTime":1763947071,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 03:33:42.726866  318946 start.go:143] virtualization:  
	I1124 03:33:42.730793  318946 out.go:179] * [functional-666975] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:33:42.733982  318946 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:33:42.734063  318946 notify.go:221] Checking for updates...
	I1124 03:33:42.737935  318946 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:33:42.740839  318946 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 03:33:42.743820  318946 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 03:33:42.746823  318946 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:33:42.749782  318946 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:33:42.753215  318946 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:33:42.753840  318946 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:33:42.783243  318946 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:33:42.783362  318946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:33:42.843039  318946 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 03:33:42.833401037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:33:42.843134  318946 docker.go:319] overlay module found
	I1124 03:33:42.846158  318946 out.go:179] * Using the docker driver based on existing profile
	I1124 03:33:42.848940  318946 start.go:309] selected driver: docker
	I1124 03:33:42.848960  318946 start.go:927] validating driver "docker" against &{Name:functional-666975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-666975 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:33:42.849059  318946 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:33:42.852694  318946 out.go:203] 
	W1124 03:33:42.855508  318946 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 03:33:42.858410  318946 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-666975 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
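
--dry-run stops after validation, so the memory floor is enforced before any container is touched; a sketch with the values from this run:

	out/minikube-linux-arm64 start -p functional-666975 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=crio
	echo $?   # 23: RSRC_INSUFFICIENT_REQ_MEMORY (250MiB < 1800MB minimum)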

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-666975 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-666975 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (199.429439ms)
-- stdout --
	* [functional-666975] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1124 03:33:43.172665  319065 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:33:43.172808  319065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:33:43.172820  319065 out.go:374] Setting ErrFile to fd 2...
	I1124 03:33:43.172825  319065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:33:43.173205  319065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:33:43.173647  319065 out.go:368] Setting JSON to false
	I1124 03:33:43.174584  319065 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8153,"bootTime":1763947071,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 03:33:43.174664  319065 start.go:143] virtualization:  
	I1124 03:33:43.177734  319065 out.go:179] * [functional-666975] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1124 03:33:43.181388  319065 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:33:43.181501  319065 notify.go:221] Checking for updates...
	I1124 03:33:43.187224  319065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:33:43.190103  319065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 03:33:43.192852  319065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 03:33:43.195667  319065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:33:43.198567  319065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:33:43.201868  319065 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:33:43.202436  319065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:33:43.234532  319065 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:33:43.234640  319065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:33:43.296589  319065 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 03:33:43.287291778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:33:43.296704  319065 docker.go:319] overlay module found
	I1124 03:33:43.299923  319065 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 03:33:43.302838  319065 start.go:309] selected driver: docker
	I1124 03:33:43.302859  319065 start.go:927] validating driver "docker" against &{Name:functional-666975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-666975 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:33:43.302958  319065 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:33:43.306576  319065 out.go:203] 
	W1124 03:33:43.309375  319065 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 03:33:43.312152  319065 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
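
The -f flag takes a Go template over the status struct; "kublet" above is just the literal label text in the test's template string, while the field itself is .Kubelet. A corrected sketch:

	out/minikube-linux-arm64 -p functional-666975 status \
	  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'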

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (23.34s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [eb2993ca-d9b6-4f1f-871a-ee8435c519d8] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004108827s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-666975 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-666975 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-666975 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-666975 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [bcad6f36-eddf-4cb7-bb8e-0cb22e8c0d5e] Pending
helpers_test.go:352: "sp-pod" [bcad6f36-eddf-4cb7-bb8e-0cb22e8c0d5e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [bcad6f36-eddf-4cb7-bb8e-0cb22e8c0d5e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.00391841s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-666975 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-666975 delete -f testdata/storage-provisioner/pod.yaml
E1124 03:23:53.710884  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-666975 delete -f testdata/storage-provisioner/pod.yaml: (1.213333019s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-666975 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ac0298d0-e014-44ae-890a-ae8b65c341fa] Pending
helpers_test.go:352: "sp-pod" [ac0298d0-e014-44ae-890a-ae8b65c341fa] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00734461s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-666975 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.34s)
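
Condensed, the sequence above verifies that PVC-backed data outlives any one pod: write through sp-pod, delete it, and read the file back from a replacement pod bound to the same claim:

	kubectl --context functional-666975 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-666975 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-666975 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-666975 exec sp-pod -- ls /tmp/mount   # foo survives the pod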

TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (2.04s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh -n functional-666975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 cp functional-666975:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1767530491/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh -n functional-666975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh -n functional-666975 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.04s)
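
The cp subcommand copies in both directions, with the node side addressed as <profile>:<path>; condensed from the run above (host destination shortened):

	out/minikube-linux-arm64 -p functional-666975 cp testdata/cp-test.txt /home/docker/cp-test.txt        # host -> node
	out/minikube-linux-arm64 -p functional-666975 cp functional-666975:/home/docker/cp-test.txt cp-test.txt   # node -> host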

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/291389/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "sudo cat /etc/test/nested/copy/291389/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)
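
File sync copies everything under $MINIKUBE_HOME/files/ into the node's filesystem rooted at /; a sketch, assuming the MINIKUBE_HOME and test-pid path (291389) from this run:

	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/291389"
	echo "Test file for checking file sync process" \
	  > "$MINIKUBE_HOME/files/etc/test/nested/copy/291389/hosts"
	out/minikube-linux-arm64 start -p functional-666975   # files are synced during start
	out/minikube-linux-arm64 -p functional-666975 ssh "sudo cat /etc/test/nested/copy/291389/hosts"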

TestFunctional/parallel/CertSync (2.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/291389.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "sudo cat /etc/ssl/certs/291389.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/291389.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "sudo cat /usr/share/ca-certificates/291389.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2913892.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "sudo cat /etc/ssl/certs/2913892.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2913892.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "sudo cat /usr/share/ca-certificates/2913892.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.31s)
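
The hash-named paths (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash aliases for the synced PEM files, so TLS tooling that scans /etc/ssl/certs by hash can resolve them; the hash can be recomputed inside the node, assuming the file from this run:

	out/minikube-linux-arm64 -p functional-666975 ssh \
	  "sudo openssl x509 -noout -subject_hash -in /etc/ssl/certs/291389.pem"   # prints the hash, e.g. 51391683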

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-666975 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
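
The go-template ranges over the first node's label map and emits only the keys; an equivalent, slightly more readable form (an assumption, not taken from the test):

	kubectl --context functional-666975 get nodes \
	  -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}}{{"\n"}}{{end}}'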

TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666975 ssh "sudo systemctl is-active docker": exit status 1 (345.251314ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666975 ssh "sudo systemctl is-active containerd": exit status 1 (391.790356ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
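
systemctl is-active exits 0 only for an active unit, and "inactive" exits nonzero (status 3 here), which the test relies on to prove docker and containerd are disabled when crio is the active runtime; sketched, with the crio check an assumption beyond what the test runs:

	out/minikube-linux-arm64 -p functional-666975 ssh "sudo systemctl is-active docker"   # inactive, nonzero exit
	out/minikube-linux-arm64 -p functional-666975 ssh "sudo systemctl is-active crio"     # active, exit 0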

TestFunctional/parallel/License (0.4s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.40s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.87s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.87s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-666975 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-666975 image ls --format short --alsologtostderr:
I1124 03:33:51.639237  319618 out.go:360] Setting OutFile to fd 1 ...
I1124 03:33:51.639485  319618 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:33:51.639513  319618 out.go:374] Setting ErrFile to fd 2...
I1124 03:33:51.639534  319618 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:33:51.639907  319618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
I1124 03:33:51.640698  319618 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 03:33:51.640886  319618 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 03:33:51.641524  319618 cli_runner.go:164] Run: docker container inspect functional-666975 --format={{.State.Status}}
I1124 03:33:51.658806  319618 ssh_runner.go:195] Run: systemctl --version
I1124 03:33:51.658860  319618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-666975
I1124 03:33:51.676567  319618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/functional-666975/id_rsa Username:docker}
I1124 03:33:51.782072  319618 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
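
As the stderr trace shows, image ls on a crio cluster is backed by crictl inside the node; the same raw listing can be pulled directly:

	out/minikube-linux-arm64 -p functional-666975 ssh "sudo crictl images --output json"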

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-666975 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ localhost/my-image                      │ functional-666975  │ 1a0fef9c2f20e │ 1.64MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-666975 image ls --format table --alsologtostderr:
I1124 03:33:56.371542  320096 out.go:360] Setting OutFile to fd 1 ...
I1124 03:33:56.371659  320096 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:33:56.371664  320096 out.go:374] Setting ErrFile to fd 2...
I1124 03:33:56.371669  320096 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:33:56.372052  320096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
I1124 03:33:56.373108  320096 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 03:33:56.373249  320096 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 03:33:56.373843  320096 cli_runner.go:164] Run: docker container inspect functional-666975 --format={{.State.Status}}
I1124 03:33:56.392335  320096 ssh_runner.go:195] Run: systemctl --version
I1124 03:33:56.392400  320096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-666975
I1124 03:33:56.411190  320096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/functional-666975/id_rsa Username:docker}
I1124 03:33:56.513042  320096 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-666975 image ls --format json --alsologtostderr:
[{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23e
af60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/ngin
x@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"1a0fef9c2f20eadbb97348cda1b56bd00b71a448d93ab3a250ca27ba0f2460de","repoDigests":["localhost/my-image@sha256:d15fa9f96333fdb1673341f505b1610c55b42b74363150c5b103201c5269720f"],"repoTags":["localhost/my-image:functional-666975"],"size":"1640791"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f
6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"a870d2f352ab1bc29a32df48f00809b0a8ece3d5e280916aeb4ba50b923b2904","repoDigests":["docker.io/library/6505c18da4d52e980192e3d48f41a59a7847124f970e7918ee83e992b18797cc-tmp@sha256:455f366ad08775c4f7aaa7310ce42a071441afa026952fdf106458dc74061d5c"],"repoTags":[],"size":"1638179"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b
0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metr
ics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-666975 image ls --format json --alsologtostderr:
I1124 03:33:56.137667  320057 out.go:360] Setting OutFile to fd 1 ...
I1124 03:33:56.137818  320057 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:33:56.137828  320057 out.go:374] Setting ErrFile to fd 2...
I1124 03:33:56.137833  320057 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:33:56.138164  320057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
I1124 03:33:56.138963  320057 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 03:33:56.139195  320057 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 03:33:56.139791  320057 cli_runner.go:164] Run: docker container inspect functional-666975 --format={{.State.Status}}
I1124 03:33:56.157348  320057 ssh_runner.go:195] Run: systemctl --version
I1124 03:33:56.157410  320057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-666975
I1124 03:33:56.175374  320057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/functional-666975/id_rsa Username:docker}
I1124 03:33:56.280844  320057 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-666975 image ls --format yaml --alsologtostderr:
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-666975 image ls --format yaml --alsologtostderr:
I1124 03:33:51.871468  319654 out.go:360] Setting OutFile to fd 1 ...
I1124 03:33:51.871618  319654 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:33:51.871644  319654 out.go:374] Setting ErrFile to fd 2...
I1124 03:33:51.871661  319654 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:33:51.871939  319654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
I1124 03:33:51.872637  319654 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 03:33:51.872796  319654 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 03:33:51.873380  319654 cli_runner.go:164] Run: docker container inspect functional-666975 --format={{.State.Status}}
I1124 03:33:51.890705  319654 ssh_runner.go:195] Run: systemctl --version
I1124 03:33:51.890774  319654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-666975
I1124 03:33:51.908836  319654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/functional-666975/id_rsa Username:docker}
I1124 03:33:52.013502  319654 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.03s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666975 ssh pgrep buildkitd: exit status 1 (291.55123ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image build -t localhost/my-image:functional-666975 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-666975 image build -t localhost/my-image:functional-666975 testdata/build --alsologtostderr: (3.502843959s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-666975 image build -t localhost/my-image:functional-666975 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a870d2f352a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-666975
--> 1a0fef9c2f2
Successfully tagged localhost/my-image:functional-666975
1a0fef9c2f20eadbb97348cda1b56bd00b71a448d93ab3a250ca27ba0f2460de
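Note: the three STEP lines above imply a build file of roughly this shape; this is a sketch inferred from the build output, not the actual contents of testdata/build:
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /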
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-666975 image build -t localhost/my-image:functional-666975 testdata/build --alsologtostderr:
I1124 03:33:52.390069  319752 out.go:360] Setting OutFile to fd 1 ...
I1124 03:33:52.390917  319752 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:33:52.390963  319752 out.go:374] Setting ErrFile to fd 2...
I1124 03:33:52.390985  319752 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:33:52.391275  319752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
I1124 03:33:52.391955  319752 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 03:33:52.392753  319752 config.go:182] Loaded profile config "functional-666975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 03:33:52.393333  319752 cli_runner.go:164] Run: docker container inspect functional-666975 --format={{.State.Status}}
I1124 03:33:52.411677  319752 ssh_runner.go:195] Run: systemctl --version
I1124 03:33:52.411747  319752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-666975
I1124 03:33:52.429143  319752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/functional-666975/id_rsa Username:docker}
I1124 03:33:52.533214  319752 build_images.go:162] Building image from path: /tmp/build.975749409.tar
I1124 03:33:52.533296  319752 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 03:33:52.541174  319752 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.975749409.tar
I1124 03:33:52.545047  319752 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.975749409.tar: stat -c "%s %y" /var/lib/minikube/build/build.975749409.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.975749409.tar': No such file or directory
I1124 03:33:52.545079  319752 ssh_runner.go:362] scp /tmp/build.975749409.tar --> /var/lib/minikube/build/build.975749409.tar (3072 bytes)
I1124 03:33:52.564717  319752 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.975749409
I1124 03:33:52.579922  319752 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.975749409 -xf /var/lib/minikube/build/build.975749409.tar
I1124 03:33:52.588337  319752 crio.go:315] Building image: /var/lib/minikube/build/build.975749409
I1124 03:33:52.588412  319752 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-666975 /var/lib/minikube/build/build.975749409 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1124 03:33:55.818663  319752 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-666975 /var/lib/minikube/build/build.975749409 --cgroup-manager=cgroupfs: (3.230214652s)
I1124 03:33:55.818739  319752 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.975749409
I1124 03:33:55.827282  319752 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.975749409.tar
I1124 03:33:55.834705  319752 build_images.go:218] Built localhost/my-image:functional-666975 from /tmp/build.975749409.tar
I1124 03:33:55.834741  319752 build_images.go:134] succeeded building to: functional-666975
I1124 03:33:55.834746  319752 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.03s)

TestFunctional/parallel/ImageCommands/Setup (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-666975
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image rm kicbase/echo-server:functional-666975 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-666975 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-666975 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-666975 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 315255: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-666975 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-666975 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-666975 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [bf12344d-3588-45ff-823e-5598cbbdb760] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [bf12344d-3588-45ff-823e-5598cbbdb760] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004056619s
I1124 03:23:37.052180  291389 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-666975 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.63.49 is working!
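Note: AccessDirect probes the LoadBalancer IP directly from the host while the tunnel is up; an equivalent manual check, assuming curl is available on the host, would be:
	curl http://10.100.63.49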
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-666975 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/List (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 service list -o json
functional_test.go:1504: Took "525.456245ms" to run "out/minikube-linux-arm64 -p functional-666975 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "369.127406ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "54.920888ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "359.453605ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "52.376233ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (6.67s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-666975 /tmp/TestFunctionalparallelMountCmdany-port3414578058/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763955212316832404" to /tmp/TestFunctionalparallelMountCmdany-port3414578058/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763955212316832404" to /tmp/TestFunctionalparallelMountCmdany-port3414578058/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763955212316832404" to /tmp/TestFunctionalparallelMountCmdany-port3414578058/001/test-1763955212316832404
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666975 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (353.919083ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1124 03:33:32.671023  291389 retry.go:31] will retry after 261.093374ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 03:33 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 03:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 03:33 test-1763955212316832404
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh cat /mount-9p/test-1763955212316832404
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-666975 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d4245ccc-74e9-4044-afcc-43d4ed5ce425] Pending
helpers_test.go:352: "busybox-mount" [d4245ccc-74e9-4044-afcc-43d4ed5ce425] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [d4245ccc-74e9-4044-afcc-43d4ed5ce425] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [d4245ccc-74e9-4044-afcc-43d4ed5ce425] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004027757s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-666975 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-666975 /tmp/TestFunctionalparallelMountCmdany-port3414578058/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.67s)

TestFunctional/parallel/MountCmd/specific-port (1.89s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-666975 /tmp/TestFunctionalparallelMountCmdspecific-port852473531/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666975 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (384.939279ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1124 03:33:39.374848  291389 retry.go:31] will retry after 420.124545ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-666975 /tmp/TestFunctionalparallelMountCmdspecific-port852473531/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666975 ssh "sudo umount -f /mount-9p": exit status 1 (294.398171ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-666975 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-666975 /tmp/TestFunctionalparallelMountCmdspecific-port852473531/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.8s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-666975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1827520769/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-666975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1827520769/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-666975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1827520769/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-666975 ssh "findmnt -T" /mount1: exit status 1 (591.656109ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1124 03:33:41.470845  291389 retry.go:31] will retry after 254.290227ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-666975 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-666975 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-666975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1827520769/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-666975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1827520769/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-666975 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1827520769/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-666975
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-666975
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-666975
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (211.9s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1124 03:36:09.843047  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:37:32.915034  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m31.027673567s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (211.90s)

TestMultiControlPlane/serial/DeployApp (37.48s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 kubectl -- rollout status deployment/busybox: (34.807092331s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-79hbr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-f4rrj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-txhzq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-79hbr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-f4rrj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-txhzq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-79hbr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-f4rrj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-txhzq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (37.48s)

TestMultiControlPlane/serial/PingHostFromPods (1.49s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-79hbr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-79hbr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-f4rrj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-f4rrj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-txhzq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 kubectl -- exec busybox-7b57f96db7-txhzq -- sh -c "ping -c 1 192.168.49.1"
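Note: the pipeline above extracts the resolved address of host.minikube.internal from nslookup's output (awk 'NR==5' keeps the line carrying the address and cut -d' ' -f3 keeps the address field, assuming busybox nslookup's output layout); on the default docker network this is the gateway 192.168.49.1, which the ping check then targets:
	nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3   # expected: 192.168.49.1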
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.49s)

TestMultiControlPlane/serial/AddWorkerNode (57.82s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 node add --alsologtostderr -v 5
E1124 03:38:24.396743  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:24.403087  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:24.414444  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:24.435868  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:24.477204  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:24.558587  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:24.719853  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:25.041240  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:25.682901  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:26.965035  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:29.526921  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:34.649007  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:38:44.890604  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:39:05.372084  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 node add --alsologtostderr -v 5: (56.774435687s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 status --alsologtostderr -v 5: (1.046836164s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.82s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-273960 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.080180147s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

TestMultiControlPlane/serial/CopyFile (20.14s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 status --output json --alsologtostderr -v 5: (1.018403814s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp testdata/cp-test.txt ha-273960:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3523766684/001/cp-test_ha-273960.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960:/home/docker/cp-test.txt ha-273960-m02:/home/docker/cp-test_ha-273960_ha-273960-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m02 "sudo cat /home/docker/cp-test_ha-273960_ha-273960-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960:/home/docker/cp-test.txt ha-273960-m03:/home/docker/cp-test_ha-273960_ha-273960-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m03 "sudo cat /home/docker/cp-test_ha-273960_ha-273960-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960:/home/docker/cp-test.txt ha-273960-m04:/home/docker/cp-test_ha-273960_ha-273960-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m04 "sudo cat /home/docker/cp-test_ha-273960_ha-273960-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp testdata/cp-test.txt ha-273960-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3523766684/001/cp-test_ha-273960-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960-m02:/home/docker/cp-test.txt ha-273960:/home/docker/cp-test_ha-273960-m02_ha-273960.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960 "sudo cat /home/docker/cp-test_ha-273960-m02_ha-273960.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960-m02:/home/docker/cp-test.txt ha-273960-m03:/home/docker/cp-test_ha-273960-m02_ha-273960-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m03 "sudo cat /home/docker/cp-test_ha-273960-m02_ha-273960-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960-m02:/home/docker/cp-test.txt ha-273960-m04:/home/docker/cp-test_ha-273960-m02_ha-273960-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m04 "sudo cat /home/docker/cp-test_ha-273960-m02_ha-273960-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp testdata/cp-test.txt ha-273960-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3523766684/001/cp-test_ha-273960-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960-m03:/home/docker/cp-test.txt ha-273960:/home/docker/cp-test_ha-273960-m03_ha-273960.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960 "sudo cat /home/docker/cp-test_ha-273960-m03_ha-273960.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960-m03:/home/docker/cp-test.txt ha-273960-m02:/home/docker/cp-test_ha-273960-m03_ha-273960-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m02 "sudo cat /home/docker/cp-test_ha-273960-m03_ha-273960-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960-m03:/home/docker/cp-test.txt ha-273960-m04:/home/docker/cp-test_ha-273960-m03_ha-273960-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m04 "sudo cat /home/docker/cp-test_ha-273960-m03_ha-273960-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp testdata/cp-test.txt ha-273960-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3523766684/001/cp-test_ha-273960-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960-m04:/home/docker/cp-test.txt ha-273960:/home/docker/cp-test_ha-273960-m04_ha-273960.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960 "sudo cat /home/docker/cp-test_ha-273960-m04_ha-273960.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960-m04:/home/docker/cp-test.txt ha-273960-m02:/home/docker/cp-test_ha-273960-m04_ha-273960-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m02 "sudo cat /home/docker/cp-test_ha-273960-m04_ha-273960-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 cp ha-273960-m04:/home/docker/cp-test.txt ha-273960-m03:/home/docker/cp-test_ha-273960-m04_ha-273960-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 ssh -n ha-273960-m03 "sudo cat /home/docker/cp-test_ha-273960-m04_ha-273960-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.14s)

TestMultiControlPlane/serial/StopSecondaryNode (12.81s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 node stop m02 --alsologtostderr -v 5
E1124 03:39:46.335045  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 node stop m02 --alsologtostderr -v 5: (12.028436005s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-273960 status --alsologtostderr -v 5: exit status 7 (779.593517ms)

-- stdout --
	ha-273960
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-273960-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-273960-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-273960-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1124 03:39:49.191959  335319 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:39:49.192087  335319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:49.192103  335319 out.go:374] Setting ErrFile to fd 2...
	I1124 03:39:49.192109  335319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:49.192358  335319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:39:49.192580  335319 out.go:368] Setting JSON to false
	I1124 03:39:49.192621  335319 mustload.go:66] Loading cluster: ha-273960
	I1124 03:39:49.192695  335319 notify.go:221] Checking for updates...
	I1124 03:39:49.193744  335319 config.go:182] Loaded profile config "ha-273960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:39:49.193769  335319 status.go:174] checking status of ha-273960 ...
	I1124 03:39:49.194435  335319 cli_runner.go:164] Run: docker container inspect ha-273960 --format={{.State.Status}}
	I1124 03:39:49.215579  335319 status.go:371] ha-273960 host status = "Running" (err=<nil>)
	I1124 03:39:49.215604  335319 host.go:66] Checking if "ha-273960" exists ...
	I1124 03:39:49.215912  335319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-273960
	I1124 03:39:49.248124  335319 host.go:66] Checking if "ha-273960" exists ...
	I1124 03:39:49.248435  335319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:39:49.248488  335319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-273960
	I1124 03:39:49.278572  335319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/ha-273960/id_rsa Username:docker}
	I1124 03:39:49.380230  335319 ssh_runner.go:195] Run: systemctl --version
	I1124 03:39:49.386997  335319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:39:49.400422  335319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:39:49.458851  335319 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-24 03:39:49.448778898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:39:49.459425  335319 kubeconfig.go:125] found "ha-273960" server: "https://192.168.49.254:8443"
	I1124 03:39:49.459459  335319 api_server.go:166] Checking apiserver status ...
	I1124 03:39:49.459513  335319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:39:49.471011  335319 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1235/cgroup
	I1124 03:39:49.479184  335319 api_server.go:182] apiserver freezer: "11:freezer:/docker/0c82dad6b3ca5d01613fb684f014a3503e08ce78667b834ffdb623033c66b694/crio/crio-2df3f5425dcee495f075a717a01993b2e679d1557b4118b5f3bd2a2a478df3a9"
	I1124 03:39:49.479266  335319 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0c82dad6b3ca5d01613fb684f014a3503e08ce78667b834ffdb623033c66b694/crio/crio-2df3f5425dcee495f075a717a01993b2e679d1557b4118b5f3bd2a2a478df3a9/freezer.state
	I1124 03:39:49.487837  335319 api_server.go:204] freezer state: "THAWED"
	I1124 03:39:49.487870  335319 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 03:39:49.495989  335319 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 03:39:49.496020  335319 status.go:463] ha-273960 apiserver status = Running (err=<nil>)
	I1124 03:39:49.496032  335319 status.go:176] ha-273960 status: &{Name:ha-273960 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:39:49.496077  335319 status.go:174] checking status of ha-273960-m02 ...
	I1124 03:39:49.496401  335319 cli_runner.go:164] Run: docker container inspect ha-273960-m02 --format={{.State.Status}}
	I1124 03:39:49.514080  335319 status.go:371] ha-273960-m02 host status = "Stopped" (err=<nil>)
	I1124 03:39:49.514101  335319 status.go:384] host is not running, skipping remaining checks
	I1124 03:39:49.514108  335319 status.go:176] ha-273960-m02 status: &{Name:ha-273960-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:39:49.514135  335319 status.go:174] checking status of ha-273960-m03 ...
	I1124 03:39:49.514495  335319 cli_runner.go:164] Run: docker container inspect ha-273960-m03 --format={{.State.Status}}
	I1124 03:39:49.534054  335319 status.go:371] ha-273960-m03 host status = "Running" (err=<nil>)
	I1124 03:39:49.534090  335319 host.go:66] Checking if "ha-273960-m03" exists ...
	I1124 03:39:49.534663  335319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-273960-m03
	I1124 03:39:49.551759  335319 host.go:66] Checking if "ha-273960-m03" exists ...
	I1124 03:39:49.552084  335319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:39:49.552128  335319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-273960-m03
	I1124 03:39:49.570332  335319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/ha-273960-m03/id_rsa Username:docker}
	I1124 03:39:49.676351  335319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:39:49.689808  335319 kubeconfig.go:125] found "ha-273960" server: "https://192.168.49.254:8443"
	I1124 03:39:49.689836  335319 api_server.go:166] Checking apiserver status ...
	I1124 03:39:49.689900  335319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:39:49.700991  335319 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup
	I1124 03:39:49.713213  335319 api_server.go:182] apiserver freezer: "11:freezer:/docker/6a1aa0c727d44032f9849bb74766870b5e67e3808284627b7d23fa9a8630e03e/crio/crio-7e427fb4c7c834358fae94e1206a16ca4150f2d235deceb689fad85bc38eadf8"
	I1124 03:39:49.713288  335319 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6a1aa0c727d44032f9849bb74766870b5e67e3808284627b7d23fa9a8630e03e/crio/crio-7e427fb4c7c834358fae94e1206a16ca4150f2d235deceb689fad85bc38eadf8/freezer.state
	I1124 03:39:49.722282  335319 api_server.go:204] freezer state: "THAWED"
	I1124 03:39:49.722360  335319 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 03:39:49.730852  335319 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 03:39:49.730882  335319 status.go:463] ha-273960-m03 apiserver status = Running (err=<nil>)
	I1124 03:39:49.730892  335319 status.go:176] ha-273960-m03 status: &{Name:ha-273960-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:39:49.730908  335319 status.go:174] checking status of ha-273960-m04 ...
	I1124 03:39:49.731216  335319 cli_runner.go:164] Run: docker container inspect ha-273960-m04 --format={{.State.Status}}
	I1124 03:39:49.749179  335319 status.go:371] ha-273960-m04 host status = "Running" (err=<nil>)
	I1124 03:39:49.749206  335319 host.go:66] Checking if "ha-273960-m04" exists ...
	I1124 03:39:49.749504  335319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-273960-m04
	I1124 03:39:49.765727  335319 host.go:66] Checking if "ha-273960-m04" exists ...
	I1124 03:39:49.766062  335319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:39:49.766108  335319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-273960-m04
	I1124 03:39:49.783120  335319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/ha-273960-m04/id_rsa Username:docker}
	I1124 03:39:49.887684  335319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:39:49.904824  335319 status.go:176] ha-273960-m04 status: &{Name:ha-273960-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.81s)
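The stderr above shows how the status check decides an apiserver is up: resolve kube-apiserver's freezer cgroup from /proc/<pid>/cgroup, read freezer.state, and treat THAWED as "not paused" before hitting /healthz. A compressed sketch of that probe, assuming a cgroup v1 host like the Ubuntu 20.04 runner here (paths and parsing are simplified relative to minikube's api_server.go):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Matches lines like "11:freezer:/docker/<id>/crio/crio-<id>" from /proc/<pid>/cgroup.
var freezerLine = regexp.MustCompile(`^[0-9]+:freezer:(.+)$`)

// freezerState returns e.g. "THAWED" or "FROZEN" for the given pid.
func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if m := freezerLine.FindStringSubmatch(line); m != nil {
			// Mirrors the `sudo cat /sys/fs/cgroup/freezer/.../freezer.state` call above.
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + m[1] + "/freezer.state")
			return strings.TrimSpace(string(state)), err
		}
	}
	return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
}

func main() {
	// 1235 is the apiserver pid pgrep found in this run; purely illustrative.
	fmt.Println(freezerState(1235))
}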

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)
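The Degraded/HAppy checks in this group shell out to `minikube profile list --output json` and read each profile's Status. A decoding sketch; the envelope below ("valid"/"invalid" arrays with Name and Status per profile) reflects that command's JSON as I understand it, so treat the field set as an assumption rather than a schema reference:

package main

import (
	"encoding/json"
	"fmt"
)

// Only the fields the degraded-state check needs are modeled.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Illustrative payload for a cluster with one control plane stopped.
	doc := `{"invalid":[],"valid":[{"Name":"ha-273960","Status":"Degraded"}]}`
	var pl profileList
	if err := json.Unmarshal([]byte(doc), &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println(p.Name, p.Status) // ha-273960 Degraded
	}
}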

TestMultiControlPlane/serial/RestartSecondaryNode (32.52s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 node start m02 --alsologtostderr -v 5: (31.067343209s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 status --alsologtostderr -v 5: (1.313300659s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.52s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.42s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.419099056s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.42s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (118.32s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 stop --alsologtostderr -v 5: (31.715924732s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 start --wait true --alsologtostderr -v 5
E1124 03:41:08.258709  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:41:09.842291  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 start --wait true --alsologtostderr -v 5: (1m26.407830291s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (118.32s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.84s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 node delete m03 --alsologtostderr -v 5: (10.864750401s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.84s)
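The `kubectl get nodes -o go-template` check above renders one line per node that carries a Ready condition. Since kubectl evaluates go-templates against the raw JSON object, the same template runs under Go's text/template with a generic interface{}; the two-node document below is a stand-in, not output from this run:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// The exact template from the test invocation above.
const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// A minimal stand-in for `kubectl get nodes -o json`.
const doc = `{"items":[
  {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},{"type":"Ready","status":"True"}]}},
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}
]}`

func main() {
	var v interface{}
	if err := json.Unmarshal([]byte(doc), &v); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	// Prints one " True" line per Ready node, which is what the test asserts on.
	if err := t.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
}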

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

TestMultiControlPlane/serial/StopCluster (25.49s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 stop --alsologtostderr -v 5: (25.371356284s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-273960 status --alsologtostderr -v 5: exit status 7 (118.220938ms)
-- stdout --
	ha-273960
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-273960-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-273960-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1124 03:43:01.027480  347566 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:43:01.027662  347566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:43:01.027672  347566 out.go:374] Setting ErrFile to fd 2...
	I1124 03:43:01.027678  347566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:43:01.027921  347566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:43:01.028104  347566 out.go:368] Setting JSON to false
	I1124 03:43:01.028136  347566 mustload.go:66] Loading cluster: ha-273960
	I1124 03:43:01.028195  347566 notify.go:221] Checking for updates...
	I1124 03:43:01.028557  347566 config.go:182] Loaded profile config "ha-273960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:43:01.028581  347566 status.go:174] checking status of ha-273960 ...
	I1124 03:43:01.029147  347566 cli_runner.go:164] Run: docker container inspect ha-273960 --format={{.State.Status}}
	I1124 03:43:01.048603  347566 status.go:371] ha-273960 host status = "Stopped" (err=<nil>)
	I1124 03:43:01.048626  347566 status.go:384] host is not running, skipping remaining checks
	I1124 03:43:01.048635  347566 status.go:176] ha-273960 status: &{Name:ha-273960 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:43:01.048665  347566 status.go:174] checking status of ha-273960-m02 ...
	I1124 03:43:01.048967  347566 cli_runner.go:164] Run: docker container inspect ha-273960-m02 --format={{.State.Status}}
	I1124 03:43:01.078012  347566 status.go:371] ha-273960-m02 host status = "Stopped" (err=<nil>)
	I1124 03:43:01.078036  347566 status.go:384] host is not running, skipping remaining checks
	I1124 03:43:01.078055  347566 status.go:176] ha-273960-m02 status: &{Name:ha-273960-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:43:01.078077  347566 status.go:174] checking status of ha-273960-m04 ...
	I1124 03:43:01.078366  347566 cli_runner.go:164] Run: docker container inspect ha-273960-m04 --format={{.State.Status}}
	I1124 03:43:01.096202  347566 status.go:371] ha-273960-m04 host status = "Stopped" (err=<nil>)
	I1124 03:43:01.096223  347566 status.go:384] host is not running, skipping remaining checks
	I1124 03:43:01.096229  347566 status.go:176] ha-273960-m04 status: &{Name:ha-273960-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (25.49s)
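Both this test and StopSecondaryNode above get exit status 7 from `minikube status`: the command ORs one flag per failing component across all nodes, so a single fully stopped node, or a fully stopped cluster, yields 1|2|4 = 7. A sketch of that bitmask; the flag names are borrowed from minikube's cmd/minikube/cmd/status.go, and the per-flag conditions here are an approximation of its logic:

package main

import "fmt"

// Assumed to mirror the constants in minikube's status command.
const (
	minikubeNotRunningStatusFlag = 1 << 0 // host not running
	clusterNotRunningStatusFlag  = 1 << 1 // kubelet not running
	k8sNotRunningStatusFlag      = 1 << 2 // apiserver not running
)

type nodeStatus struct{ Host, Kubelet, APIServer string }

// exitCode accumulates flags over every node, as the status output above does.
func exitCode(statuses []nodeStatus) int {
	c := 0
	for _, s := range statuses {
		if s.Host != "Running" {
			c |= minikubeNotRunningStatusFlag
		}
		if s.Kubelet != "Running" {
			c |= clusterNotRunningStatusFlag
		}
		// Workers report APIServer "Irrelevant" and are not counted.
		if s.APIServer != "Running" && s.APIServer != "Irrelevant" {
			c |= k8sNotRunningStatusFlag
		}
	}
	return c
}

func main() {
	// The StopSecondaryNode shape: only ha-273960-m02 is down, yet all
	// three flags are set by that one node, giving exit status 7.
	fmt.Println(exitCode([]nodeStatus{
		{"Running", "Running", "Running"},
		{"Stopped", "Stopped", "Stopped"},
		{"Running", "Running", "Running"},
		{"Running", "Running", "Irrelevant"},
	}))
}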

TestMultiControlPlane/serial/RestartCluster (95.66s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1124 03:43:24.397788  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:52.100003  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m34.692681353s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (95.66s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (55.33s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 node add --control-plane --alsologtostderr -v 5: (54.275581877s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-273960 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-273960 status --alsologtostderr -v 5: (1.052532804s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (55.33s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.08298543s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

TestJSONOutput/start/Command (80.08s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-901209 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1124 03:46:09.844811  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-901209 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.071460692s)
--- PASS: TestJSONOutput/start/Command (80.08s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-901209 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-901209 --output=json --user=testUser: (5.831545804s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-971472 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-971472 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.389702ms)
-- stdout --
	{"specversion":"1.0","id":"b7b0365c-80bc-4e1e-a8ed-9f11c7fae69c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-971472] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"53d62fb5-7197-40c5-a448-007c57f90ccf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21975"}}
	{"specversion":"1.0","id":"b2e89cc7-d4c0-4860-9097-5163458b597d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"04700e57-f62e-490c-881b-a805d9505757","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig"}}
	{"specversion":"1.0","id":"f8452f67-82fe-406b-80a5-29189c7829c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube"}}
	{"specversion":"1.0","id":"da45efe2-72ad-4166-976a-229ab317400e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9ad82655-35ce-4484-9daf-296ba3ee6406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ddb89f1c-7bce-40a9-8f11-8955000bf8fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-971472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-971472
--- PASS: TestErrorJSONOutput (0.23s)
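Every stdout line above is a CloudEvents 1.0 envelope, which is what `--output=json` emits. Decoding one is a plain json.Unmarshal; the struct below models only the fields visible in this run, and the sample line is the DRV_UNSUPPORTED_OS error event copied from the output above:

package main

import (
	"encoding/json"
	"fmt"
)

// One minikube JSON event; data is a flat string map in the events shown here.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"ddb89f1c-7bce-40a9-8f11-8955000bf8fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	// io.k8s.sigs.minikube.error 56 The driver 'fail' is not supported on linux/arm64
	fmt.Println(e.Type, e.Data["exitcode"], e.Data["message"])
}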

TestKicCustomNetwork/create_custom_network (56.81s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-931069 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-931069 --network=: (54.528561415s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-931069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-931069
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-931069: (2.24876063s)
--- PASS: TestKicCustomNetwork/create_custom_network (56.81s)

TestKicCustomNetwork/use_default_bridge_network (37.46s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-128719 --network=bridge
E1124 03:48:24.398622  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-128719 --network=bridge: (35.286296986s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-128719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-128719
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-128719: (2.146993895s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.46s)

TestKicExistingNetwork (37.14s)

=== RUN   TestKicExistingNetwork
I1124 03:48:51.088195  291389 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1124 03:48:51.104258  291389 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1124 03:48:51.104344  291389 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1124 03:48:51.104362  291389 cli_runner.go:164] Run: docker network inspect existing-network
W1124 03:48:51.124694  291389 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1124 03:48:51.124724  291389 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1124 03:48:51.124745  291389 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1124 03:48:51.124868  291389 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1124 03:48:51.145629  291389 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-740fb099fccc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:7a:9c:b0:4d:41} reservation:<nil>}
I1124 03:48:51.146084  291389 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a791d0}
I1124 03:48:51.146123  291389 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1124 03:48:51.146182  291389 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1124 03:48:51.212264  291389 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-292916 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-292916 --network=existing-network: (34.840404596s)
helpers_test.go:175: Cleaning up "existing-network-292916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-292916
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-292916: (2.146562905s)
I1124 03:49:28.215218  291389 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.14s)
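The network_create.go lines above show the subnet probe: 192.168.49.0/24 is skipped because the default minikube bridge holds it, and the next candidate, 192.168.58.0/24, is taken. A rough Go sketch of that walk; the step of 9 matches the 49 -> 58 jump in this log, but the candidate list and termination are simplifications of minikube's actual algorithm:

package main

import (
	"fmt"
	"net"
)

// freeSubnet returns the first /24 candidate not already occupied.
func freeSubnet(taken map[string]bool) *net.IPNet {
	for third := 49; third <= 254; third += 9 { // 49, 58, 67, ... as seen above
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			continue
		}
		_, n, _ := net.ParseCIDR(cidr)
		return n
	}
	return nil
}

func main() {
	// With the default bridge on 192.168.49.0/24, the probe lands on
	// 192.168.58.0/24, matching the "using free private subnet" line above.
	fmt.Println(freeSubnet(map[string]bool{"192.168.49.0/24": true}))
}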

TestKicCustomSubnet (36.01s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-016891 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-016891 --subnet=192.168.60.0/24: (33.781803404s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-016891 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-016891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-016891
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-016891: (2.193626858s)
--- PASS: TestKicCustomSubnet (36.01s)

TestKicStaticIP (38.63s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-331398 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-331398 --static-ip=192.168.200.200: (36.123376681s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-331398 ip
helpers_test.go:175: Cleaning up "static-ip-331398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-331398
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-331398: (2.350874482s)
--- PASS: TestKicStaticIP (38.63s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (72.88s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-756076 --driver=docker  --container-runtime=crio
E1124 03:51:09.846611  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-756076 --driver=docker  --container-runtime=crio: (34.5772663s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-758833 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-758833 --driver=docker  --container-runtime=crio: (32.66664444s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-756076
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-758833
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-758833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-758833
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-758833: (2.085236846s)
helpers_test.go:175: Cleaning up "first-756076" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-756076
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-756076: (2.0736323s)
--- PASS: TestMinikubeProfile (72.88s)

TestMountStart/serial/StartWithMountFirst (8.75s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-937414 --memory=3072 --mount-string /tmp/TestMountStartserial2957210330/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-937414 --memory=3072 --mount-string /tmp/TestMountStartserial2957210330/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.7444401s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.75s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-937414 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (8.94s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-939564 --memory=3072 --mount-string /tmp/TestMountStartserial2957210330/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-939564 --memory=3072 --mount-string /tmp/TestMountStartserial2957210330/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.939334509s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.94s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-939564 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-937414 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-937414 --alsologtostderr -v=5: (1.699383442s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-939564 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-939564
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-939564: (1.29101951s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.3s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-939564
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-939564: (7.299755552s)
--- PASS: TestMountStart/serial/RestartStopped (8.30s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-939564 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (139.66s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-476309 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1124 03:53:24.396794  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:54:12.917923  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-476309 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m19.123882299s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.66s)

TestMultiNode/serial/DeployApp2Nodes (4.84s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
E1124 03:54:47.461674  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-476309 -- rollout status deployment/busybox: (3.108591425s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- exec busybox-7b57f96db7-fszt9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- exec busybox-7b57f96db7-pqh2r -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- exec busybox-7b57f96db7-fszt9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- exec busybox-7b57f96db7-pqh2r -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- exec busybox-7b57f96db7-fszt9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- exec busybox-7b57f96db7-pqh2r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.84s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- exec busybox-7b57f96db7-fszt9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- exec busybox-7b57f96db7-fszt9 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- exec busybox-7b57f96db7-pqh2r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-476309 -- exec busybox-7b57f96db7-pqh2r -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)
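The busybox pods above resolve host.minikube.internal and carve the host IP out of nslookup's fifth output line (awk 'NR==5' | cut -d' ' -f3), then ping it. A Go equivalent of that extraction; the sample text imitates busybox nslookup output and is illustrative, not captured from this run:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5, field 3.
// Like cut, strings.Split keeps empty fields, so the field math matches.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.67.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // 192.168.67.1, the gateway the pods then ping
}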

TestMultiNode/serial/AddNode (58.29s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-476309 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-476309 -v=5 --alsologtostderr: (57.594898028s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.29s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-476309 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

TestMultiNode/serial/CopyFile (10.91s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 cp testdata/cp-test.txt multinode-476309:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 cp multinode-476309:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3015337103/001/cp-test_multinode-476309.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 cp multinode-476309:/home/docker/cp-test.txt multinode-476309-m02:/home/docker/cp-test_multinode-476309_multinode-476309-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309-m02 "sudo cat /home/docker/cp-test_multinode-476309_multinode-476309-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 cp multinode-476309:/home/docker/cp-test.txt multinode-476309-m03:/home/docker/cp-test_multinode-476309_multinode-476309-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309-m03 "sudo cat /home/docker/cp-test_multinode-476309_multinode-476309-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 cp testdata/cp-test.txt multinode-476309-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 cp multinode-476309-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3015337103/001/cp-test_multinode-476309-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 cp multinode-476309-m02:/home/docker/cp-test.txt multinode-476309:/home/docker/cp-test_multinode-476309-m02_multinode-476309.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309 "sudo cat /home/docker/cp-test_multinode-476309-m02_multinode-476309.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 cp multinode-476309-m02:/home/docker/cp-test.txt multinode-476309-m03:/home/docker/cp-test_multinode-476309-m02_multinode-476309-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309-m03 "sudo cat /home/docker/cp-test_multinode-476309-m02_multinode-476309-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 cp testdata/cp-test.txt multinode-476309-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 cp multinode-476309-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3015337103/001/cp-test_multinode-476309-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 cp multinode-476309-m03:/home/docker/cp-test.txt multinode-476309:/home/docker/cp-test_multinode-476309-m03_multinode-476309.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309 "sudo cat /home/docker/cp-test_multinode-476309-m03_multinode-476309.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 cp multinode-476309-m03:/home/docker/cp-test.txt multinode-476309-m02:/home/docker/cp-test_multinode-476309-m03_multinode-476309-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 ssh -n multinode-476309-m02 "sudo cat /home/docker/cp-test_multinode-476309-m03_multinode-476309-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.91s)

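The matrix above exercises minikube cp in all three directions, verifying each transfer with ssh + sudo cat. Condensed sketch (stock minikube binary assumed in place of out/minikube-linux-arm64):

	minikube -p multinode-476309 cp testdata/cp-test.txt multinode-476309:/home/docker/cp-test.txt         # host -> node
	minikube -p multinode-476309 cp multinode-476309:/home/docker/cp-test.txt /tmp/cp-test.txt             # node -> host
	minikube -p multinode-476309 cp multinode-476309:/home/docker/cp-test.txt multinode-476309-m02:/home/docker/cp-test.txt   # node -> node
	minikube -p multinode-476309 ssh -n multinode-476309-m02 "sudo cat /home/docker/cp-test.txt"           # confirm the copy landed
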
TestMultiNode/serial/StopNode (2.46s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-476309 node stop m03: (1.324416305s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-476309 status: exit status 7 (551.72295ms)

-- stdout --
	multinode-476309
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-476309-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-476309-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-476309 status --alsologtostderr: exit status 7 (583.007809ms)

-- stdout --
	multinode-476309
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-476309-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-476309-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 03:56:05.175819  397966 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:56:05.176019  397966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:56:05.176034  397966 out.go:374] Setting ErrFile to fd 2...
	I1124 03:56:05.176041  397966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:56:05.176343  397966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:56:05.176574  397966 out.go:368] Setting JSON to false
	I1124 03:56:05.176606  397966 mustload.go:66] Loading cluster: multinode-476309
	I1124 03:56:05.176697  397966 notify.go:221] Checking for updates...
	I1124 03:56:05.177069  397966 config.go:182] Loaded profile config "multinode-476309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:56:05.177090  397966 status.go:174] checking status of multinode-476309 ...
	I1124 03:56:05.177667  397966 cli_runner.go:164] Run: docker container inspect multinode-476309 --format={{.State.Status}}
	I1124 03:56:05.198671  397966 status.go:371] multinode-476309 host status = "Running" (err=<nil>)
	I1124 03:56:05.198703  397966 host.go:66] Checking if "multinode-476309" exists ...
	I1124 03:56:05.198995  397966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-476309
	I1124 03:56:05.228419  397966 host.go:66] Checking if "multinode-476309" exists ...
	I1124 03:56:05.228770  397966 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:56:05.228902  397966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-476309
	I1124 03:56:05.249392  397966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/multinode-476309/id_rsa Username:docker}
	I1124 03:56:05.356084  397966 ssh_runner.go:195] Run: systemctl --version
	I1124 03:56:05.362597  397966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:56:05.375557  397966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:56:05.456764  397966 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 03:56:05.44663563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:56:05.457322  397966 kubeconfig.go:125] found "multinode-476309" server: "https://192.168.67.2:8443"
	I1124 03:56:05.457368  397966 api_server.go:166] Checking apiserver status ...
	I1124 03:56:05.457415  397966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:56:05.469402  397966 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1246/cgroup
	I1124 03:56:05.477736  397966 api_server.go:182] apiserver freezer: "11:freezer:/docker/2645533c864adc1a73eb514470304d51d60ad67c5b466a9d8da94e502803ae4a/crio/crio-f77d1363127d998bc744b79b7cb7c92e3f3336c26351fa440f3a270e6e8292ad"
	I1124 03:56:05.477805  397966 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2645533c864adc1a73eb514470304d51d60ad67c5b466a9d8da94e502803ae4a/crio/crio-f77d1363127d998bc744b79b7cb7c92e3f3336c26351fa440f3a270e6e8292ad/freezer.state
	I1124 03:56:05.485311  397966 api_server.go:204] freezer state: "THAWED"
	I1124 03:56:05.485344  397966 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1124 03:56:05.494005  397966 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1124 03:56:05.494041  397966 status.go:463] multinode-476309 apiserver status = Running (err=<nil>)
	I1124 03:56:05.494054  397966 status.go:176] multinode-476309 status: &{Name:multinode-476309 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:56:05.494101  397966 status.go:174] checking status of multinode-476309-m02 ...
	I1124 03:56:05.494441  397966 cli_runner.go:164] Run: docker container inspect multinode-476309-m02 --format={{.State.Status}}
	I1124 03:56:05.514348  397966 status.go:371] multinode-476309-m02 host status = "Running" (err=<nil>)
	I1124 03:56:05.514378  397966 host.go:66] Checking if "multinode-476309-m02" exists ...
	I1124 03:56:05.514738  397966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-476309-m02
	I1124 03:56:05.531863  397966 host.go:66] Checking if "multinode-476309-m02" exists ...
	I1124 03:56:05.532181  397966 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:56:05.532237  397966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-476309-m02
	I1124 03:56:05.550363  397966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33281 SSHKeyPath:/home/jenkins/minikube-integration/21975-289526/.minikube/machines/multinode-476309-m02/id_rsa Username:docker}
	I1124 03:56:05.651813  397966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:56:05.665817  397966 status.go:176] multinode-476309-m02 status: &{Name:multinode-476309-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:56:05.665850  397966 status.go:174] checking status of multinode-476309-m03 ...
	I1124 03:56:05.666183  397966 cli_runner.go:164] Run: docker container inspect multinode-476309-m03 --format={{.State.Status}}
	I1124 03:56:05.685292  397966 status.go:371] multinode-476309-m03 host status = "Stopped" (err=<nil>)
	I1124 03:56:05.685313  397966 status.go:384] host is not running, skipping remaining checks
	I1124 03:56:05.685321  397966 status.go:176] multinode-476309-m03 status: &{Name:multinode-476309-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)

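Note that status exits non-zero (7 in this run) as soon as any node is stopped, so the check scripts cleanly; sketch:

	minikube -p multinode-476309 node stop m03
	if ! minikube -p multinode-476309 status >/dev/null; then
	  echo "at least one node is not fully running"   # exit status 7 here: m03 host and kubelet stopped
	fi
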
TestMultiNode/serial/StartAfterStop (8.5s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 node start m03 -v=5 --alsologtostderr
E1124 03:56:09.842044  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-476309 node start m03 -v=5 --alsologtostderr: (7.709571435s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.50s)

TestMultiNode/serial/RestartKeepsNodes (71.95s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-476309
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-476309
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-476309: (25.057328179s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-476309 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-476309 --wait=true -v=5 --alsologtostderr: (46.763702445s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-476309
--- PASS: TestMultiNode/serial/RestartKeepsNodes (71.95s)

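The invariant verified above is that a full stop/start cycle preserves the cluster's node list; sketch:

	before=$(minikube node list -p multinode-476309)
	minikube stop -p multinode-476309
	minikube start -p multinode-476309 --wait=true
	after=$(minikube node list -p multinode-476309)
	[ "$before" = "$after" ] && echo "node list preserved across restart"
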
TestMultiNode/serial/DeleteNode (5.68s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-476309 node delete m03: (4.983039178s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.68s)

TestMultiNode/serial/StopMultiNode (24.12s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-476309 stop: (23.921426881s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-476309 status: exit status 7 (100.853125ms)

-- stdout --
	multinode-476309
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-476309-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-476309 status --alsologtostderr: exit status 7 (96.868625ms)

-- stdout --
	multinode-476309
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-476309-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 03:57:55.909502  405784 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:57:55.909740  405784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:57:55.909771  405784 out.go:374] Setting ErrFile to fd 2...
	I1124 03:57:55.909790  405784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:57:55.910068  405784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 03:57:55.910301  405784 out.go:368] Setting JSON to false
	I1124 03:57:55.910362  405784 mustload.go:66] Loading cluster: multinode-476309
	I1124 03:57:55.910449  405784 notify.go:221] Checking for updates...
	I1124 03:57:55.910857  405784 config.go:182] Loaded profile config "multinode-476309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:57:55.910895  405784 status.go:174] checking status of multinode-476309 ...
	I1124 03:57:55.911727  405784 cli_runner.go:164] Run: docker container inspect multinode-476309 --format={{.State.Status}}
	I1124 03:57:55.930254  405784 status.go:371] multinode-476309 host status = "Stopped" (err=<nil>)
	I1124 03:57:55.930275  405784 status.go:384] host is not running, skipping remaining checks
	I1124 03:57:55.930282  405784 status.go:176] multinode-476309 status: &{Name:multinode-476309 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:57:55.930309  405784 status.go:174] checking status of multinode-476309-m02 ...
	I1124 03:57:55.930691  405784 cli_runner.go:164] Run: docker container inspect multinode-476309-m02 --format={{.State.Status}}
	I1124 03:57:55.952061  405784 status.go:371] multinode-476309-m02 host status = "Stopped" (err=<nil>)
	I1124 03:57:55.952087  405784 status.go:384] host is not running, skipping remaining checks
	I1124 03:57:55.952095  405784 status.go:176] multinode-476309-m02 status: &{Name:multinode-476309-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.12s)

TestMultiNode/serial/RestartMultiNode (58.1s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-476309 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1124 03:58:24.396924  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-476309 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (57.412860329s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-476309 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.10s)

TestMultiNode/serial/ValidateNameConflict (37.47s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-476309
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-476309-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-476309-m02 --driver=docker  --container-runtime=crio: exit status 14 (91.077796ms)

-- stdout --
	* [multinode-476309-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-476309-m02' is duplicated with machine name 'multinode-476309-m02' in profile 'multinode-476309'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-476309-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-476309-m03 --driver=docker  --container-runtime=crio: (34.949838606s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-476309
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-476309: exit status 80 (328.059839ms)

-- stdout --
	* Adding node m03 to cluster multinode-476309 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-476309-m03 already exists in multinode-476309-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-476309-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-476309-m03: (2.037626044s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.47s)

TestPreload (127.04s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-901156 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-901156 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m3.520187886s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-901156 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-901156 image pull gcr.io/k8s-minikube/busybox: (2.42213771s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-901156
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-901156: (5.969041346s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-901156 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1124 04:01:09.841872  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-901156 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (52.340474305s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-901156 image list
helpers_test.go:175: Cleaning up "test-preload-901156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-901156
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-901156: (2.532351164s)
--- PASS: TestPreload (127.04s)

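The sequence above checks that an image pulled into a non-preloaded cluster is still present after a stop/start with the preloaded tarball in play; sketch (driver flags elided):

	minikube start -p test-preload-901156 --preload=false --kubernetes-version=v1.32.0
	minikube -p test-preload-901156 image pull gcr.io/k8s-minikube/busybox   # image not in any preload
	minikube stop -p test-preload-901156
	minikube start -p test-preload-901156          # restart, preload enabled by default
	minikube -p test-preload-901156 image list     # busybox should still be listed
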
TestScheduledStopUnix (109.9s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-120038 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-120038 --memory=3072 --driver=docker  --container-runtime=crio: (33.705960219s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-120038 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 04:02:16.675244  419962 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:02:16.675443  419962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:02:16.675473  419962 out.go:374] Setting ErrFile to fd 2...
	I1124 04:02:16.675493  419962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:02:16.675923  419962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:02:16.676288  419962 out.go:368] Setting JSON to false
	I1124 04:02:16.676463  419962 mustload.go:66] Loading cluster: scheduled-stop-120038
	I1124 04:02:16.677628  419962 config.go:182] Loaded profile config "scheduled-stop-120038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:02:16.677867  419962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/config.json ...
	I1124 04:02:16.678126  419962 mustload.go:66] Loading cluster: scheduled-stop-120038
	I1124 04:02:16.678324  419962 config.go:182] Loaded profile config "scheduled-stop-120038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-120038 -n scheduled-stop-120038
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-120038 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 04:02:17.161059  420050 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:02:17.161179  420050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:02:17.161190  420050 out.go:374] Setting ErrFile to fd 2...
	I1124 04:02:17.161196  420050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:02:17.161453  420050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:02:17.161716  420050 out.go:368] Setting JSON to false
	I1124 04:02:17.162543  420050 daemonize_unix.go:73] killing process 419978 as it is an old scheduled stop
	I1124 04:02:17.166927  420050 mustload.go:66] Loading cluster: scheduled-stop-120038
	I1124 04:02:17.167389  420050 config.go:182] Loaded profile config "scheduled-stop-120038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:02:17.167473  420050 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/config.json ...
	I1124 04:02:17.167662  420050 mustload.go:66] Loading cluster: scheduled-stop-120038
	I1124 04:02:17.167787  420050 config.go:182] Loaded profile config "scheduled-stop-120038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1124 04:02:17.174256  291389 retry.go:31] will retry after 112.713µs: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.175651  291389 retry.go:31] will retry after 131.845µs: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.176774  291389 retry.go:31] will retry after 257.112µs: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.177845  291389 retry.go:31] will retry after 229.047µs: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.178968  291389 retry.go:31] will retry after 725.896µs: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.180067  291389 retry.go:31] will retry after 387.782µs: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.181184  291389 retry.go:31] will retry after 1.556477ms: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.183393  291389 retry.go:31] will retry after 986.534µs: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.184503  291389 retry.go:31] will retry after 2.216886ms: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.187698  291389 retry.go:31] will retry after 4.957248ms: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.192937  291389 retry.go:31] will retry after 6.194825ms: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.200228  291389 retry.go:31] will retry after 11.318952ms: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.212453  291389 retry.go:31] will retry after 8.913714ms: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.221683  291389 retry.go:31] will retry after 25.440607ms: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.252524  291389 retry.go:31] will retry after 36.136224ms: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
I1124 04:02:17.289747  291389 retry.go:31] will retry after 43.912587ms: open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-120038 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-120038 -n scheduled-stop-120038
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-120038
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-120038 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 04:02:43.169403  420412 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:02:43.169575  420412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:02:43.169586  420412 out.go:374] Setting ErrFile to fd 2...
	I1124 04:02:43.169591  420412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:02:43.169882  420412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:02:43.170151  420412 out.go:368] Setting JSON to false
	I1124 04:02:43.170248  420412 mustload.go:66] Loading cluster: scheduled-stop-120038
	I1124 04:02:43.170874  420412 config.go:182] Loaded profile config "scheduled-stop-120038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:02:43.171000  420412 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/scheduled-stop-120038/config.json ...
	I1124 04:02:43.171211  420412 mustload.go:66] Loading cluster: scheduled-stop-120038
	I1124 04:02:43.171333  420412 config.go:182] Loaded profile config "scheduled-stop-120038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1124 04:03:24.399271  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-120038
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-120038: exit status 7 (70.766407ms)

-- stdout --
	scheduled-stop-120038
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-120038 -n scheduled-stop-120038
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-120038 -n scheduled-stop-120038: exit status 7 (65.847032ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-120038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-120038
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-120038: (4.48511248s)
--- PASS: TestScheduledStopUnix (109.90s)

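The scheduled-stop lifecycle exercised above, as a standalone sketch (re-arming kills the previous scheduler process, as the "killing process ... old scheduled stop" line shows):

	minikube stop -p scheduled-stop-120038 --schedule 5m        # arm a stop five minutes out
	minikube stop -p scheduled-stop-120038 --schedule 15s       # re-arm, replacing the pending stop
	minikube stop -p scheduled-stop-120038 --cancel-scheduled   # cancel anything pending
	minikube status -p scheduled-stop-120038 --format '{{.TimeToStop}}'   # inspect the countdown
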
TestInsufficientStorage (13.36s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-575375 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-575375 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.727322039s)

-- stdout --
	{"specversion":"1.0","id":"2450416c-3531-4daf-8225-93a50e82ccb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-575375] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"40cd853b-a3b3-4bd8-98be-f925749de38d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21975"}}
	{"specversion":"1.0","id":"ff41a95d-a0fd-4f8d-93c3-b759ef4defb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"38720a5f-43df-43c1-82b1-bb1767320af3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig"}}
	{"specversion":"1.0","id":"cb0de9ce-dd55-4828-a8d4-02662a9b21ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube"}}
	{"specversion":"1.0","id":"a000b1ed-8cf4-413a-b0f8-b66410180c51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"73576391-566f-49a7-a3c9-5340b4b79073","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f784f1e1-223b-4469-be2c-22c33b2f7398","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c10cf60b-c7b4-4106-bf8a-0965ba523fb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5e9ea929-65fd-41e3-9131-a84c4cb17e33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1787f4b8-2197-4539-bc88-e880a7aceac3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"052c8c16-3674-4d14-a71d-0b4d320b96bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-575375\" primary control-plane node in \"insufficient-storage-575375\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"63e569ee-cea7-4416-9bfd-05bfdc92bb8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763935653-21975 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"656c3234-6c57-418d-bd10-0d6670873fb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"496d5b68-9aed-4025-a102-e8ffd82556f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-575375 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-575375 --output=json --layout=cluster: exit status 7 (310.506081ms)

-- stdout --
	{"Name":"insufficient-storage-575375","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-575375","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1124 04:03:43.830116  422122 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-575375" does not appear in /home/jenkins/minikube-integration/21975-289526/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-575375 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-575375 --output=json --layout=cluster: exit status 7 (312.806304ms)

-- stdout --
	{"Name":"insufficient-storage-575375","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-575375","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1124 04:03:44.142727  422189 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-575375" does not appear in /home/jenkins/minikube-integration/21975-289526/kubeconfig
	E1124 04:03:44.153409  422189 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/insufficient-storage-575375/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-575375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-575375
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-575375: (2.005458915s)
--- PASS: TestInsufficientStorage (13.36s)

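Since --output=json emits one status document per cluster, the 507/InsufficientStorage result above is easy to check mechanically; sketch, assuming jq is installed:

	minikube status -p insufficient-storage-575375 --output=json --layout=cluster \
	  | jq -r '.StatusName'   # prints InsufficientStorage (StatusCode 507) in this state
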
TestRunningBinaryUpgrade (54.34s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3050420161 start -p running-upgrade-352504 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3050420161 start -p running-upgrade-352504 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.098010348s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-352504 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-352504 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.477252101s)
helpers_test.go:175: Cleaning up "running-upgrade-352504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-352504
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-352504: (2.037347049s)
--- PASS: TestRunningBinaryUpgrade (54.34s)

TestKubernetesUpgrade (366.73s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-207884 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-207884 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.133076009s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-207884
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-207884: (1.363635655s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-207884 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-207884 status --format={{.Host}}: exit status 7 (130.14851ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-207884 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1124 04:06:09.842253  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-207884 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.47524828s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-207884 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-207884 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-207884 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (158.849694ms)

-- stdout --
	* [kubernetes-upgrade-207884] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-207884
	    minikube start -p kubernetes-upgrade-207884 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2078842 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-207884 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-207884 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-207884 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.953723847s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-207884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-207884
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-207884: (2.365569317s)
--- PASS: TestKubernetesUpgrade (366.73s)
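The flow exercised by this test can be reproduced by hand; a minimal sketch, assuming a minikube binary on PATH and a hypothetical profile name kubernetes-upgrade-demo in place of the generated one:

    # Start on an older Kubernetes, stop, then upgrade the same profile in place.
    minikube start -p kubernetes-upgrade-demo --kubernetes-version=v1.28.0 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-demo
    minikube start -p kubernetes-upgrade-demo --kubernetes-version=v1.34.1 --container-runtime=crio

    # An in-place downgrade is rejected (K8S_DOWNGRADE_UNSUPPORTED, exit status 106);
    # the supported route is the delete-and-recreate path from the suggestion above.
    minikube delete -p kubernetes-upgrade-demo
    minikube start -p kubernetes-upgrade-demo --kubernetes-version=v1.28.0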

TestMissingContainerUpgrade (127.81s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3168382991 start -p missing-upgrade-935894 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3168382991 start -p missing-upgrade-935894 --memory=3072 --driver=docker  --container-runtime=crio: (1m8.034376928s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-935894
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-935894
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-935894 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-935894 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (55.277350294s)
helpers_test.go:175: Cleaning up "missing-upgrade-935894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-935894
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-935894: (2.342079107s)
--- PASS: TestMissingContainerUpgrade (127.81s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-314310 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-314310 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (91.744575ms)

-- stdout --
	* [NoKubernetes-314310] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (46.63s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-314310 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-314310 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (46.040746615s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-314310 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.63s)

TestNoKubernetes/serial/StartWithStopK8s (15.36s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-314310 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-314310 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (12.455019438s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-314310 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-314310 status -o json: exit status 2 (483.771065ms)

-- stdout --
	{"Name":"NoKubernetes-314310","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-314310
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-314310: (2.423179276s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.36s)
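The status JSON captured above is scriptable; a minimal sketch of pulling the same fields out of it, assuming jq is installed on the host (profile name reused from this run):

    # Host stays Running while Kubelet/APIServer report Stopped after --no-kubernetes.
    # Note that minikube status itself exits non-zero (2) in this state, as seen above.
    minikube -p NoKubernetes-314310 status -o json | jq -r '.Host, .Kubelet, .APIServer'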

TestNoKubernetes/serial/Start (9.26s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-314310 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-314310 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.263544961s)
--- PASS: TestNoKubernetes/serial/Start (9.26s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.01s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21975-289526/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.01s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-314310 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-314310 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.723337ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
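The "Process exited with status 3" in the stderr above is what systemctl is-active returns for an inactive unit, so the non-zero exit is the expected outcome here. A minimal sketch of the same probe run by hand (profile name reused from this run):

    minikube ssh -p NoKubernetes-314310 "sudo systemctl is-active kubelet"
    # systemctl exits 3 for an inactive unit; minikube ssh surfaces that as a
    # non-zero exit of its own (status 1 above), which is what the test asserts.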

TestNoKubernetes/serial/ProfileList (0.7s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.70s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-314310
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-314310: (1.304236498s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (7.09s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-314310 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-314310 --driver=docker  --container-runtime=crio: (7.086386027s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.09s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-314310 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-314310 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.151264ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (0.8s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.80s)

TestStoppedBinaryUpgrade/Upgrade (57.98s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2577890309 start -p stopped-upgrade-191757 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2577890309 start -p stopped-upgrade-191757 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.108333148s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2577890309 -p stopped-upgrade-191757 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2577890309 -p stopped-upgrade-191757 stop: (1.242060603s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-191757 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-191757 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.629093316s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (57.98s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-191757
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-191757: (1.145171074s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

TestPause/serial/Start (82.5s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-396108 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1124 04:08:24.397085  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-396108 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.499304836s)
--- PASS: TestPause/serial/Start (82.50s)

TestPause/serial/SecondStartNoReconfiguration (29.16s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-396108 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-396108 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.120631485s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.16s)

TestNetworkPlugins/group/false (4.01s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-778509 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-778509 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (193.518977ms)

-- stdout --
	* [false-778509] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1124 04:10:44.779309  460424 out.go:360] Setting OutFile to fd 1 ...
	I1124 04:10:44.779463  460424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:10:44.779494  460424 out.go:374] Setting ErrFile to fd 2...
	I1124 04:10:44.779508  460424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 04:10:44.779894  460424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-289526/.minikube/bin
	I1124 04:10:44.780892  460424 out.go:368] Setting JSON to false
	I1124 04:10:44.781813  460424 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10374,"bootTime":1763947071,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 04:10:44.781924  460424 start.go:143] virtualization:  
	I1124 04:10:44.785454  460424 out.go:179] * [false-778509] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 04:10:44.788776  460424 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 04:10:44.788947  460424 notify.go:221] Checking for updates...
	I1124 04:10:44.792512  460424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 04:10:44.795307  460424 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-289526/kubeconfig
	I1124 04:10:44.798135  460424 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-289526/.minikube
	I1124 04:10:44.801140  460424 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 04:10:44.804204  460424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 04:10:44.807795  460424 config.go:182] Loaded profile config "kubernetes-upgrade-207884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 04:10:44.807904  460424 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 04:10:44.832413  460424 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 04:10:44.832548  460424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 04:10:44.900141  460424 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 04:10:44.889939007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 04:10:44.900253  460424 docker.go:319] overlay module found
	I1124 04:10:44.905323  460424 out.go:179] * Using the docker driver based on user configuration
	I1124 04:10:44.908093  460424 start.go:309] selected driver: docker
	I1124 04:10:44.908117  460424 start.go:927] validating driver "docker" against <nil>
	I1124 04:10:44.908132  460424 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 04:10:44.911804  460424 out.go:203] 
	W1124 04:10:44.914616  460424 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1124 04:10:44.917448  460424 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-778509 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-778509

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-778509

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-778509

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-778509

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-778509

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-778509

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-778509

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-778509

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-778509

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-778509

>>> host: /etc/nsswitch.conf:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: /etc/hosts:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: /etc/resolv.conf:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-778509

>>> host: crictl pods:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: crictl containers:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> k8s: describe netcat deployment:
error: context "false-778509" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-778509" does not exist

>>> k8s: netcat logs:
error: context "false-778509" does not exist

>>> k8s: describe coredns deployment:
error: context "false-778509" does not exist

>>> k8s: describe coredns pods:
error: context "false-778509" does not exist

>>> k8s: coredns logs:
error: context "false-778509" does not exist

>>> k8s: describe api server pod(s):
error: context "false-778509" does not exist

>>> k8s: api server logs:
error: context "false-778509" does not exist

>>> host: /etc/cni:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: ip a s:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: ip r s:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: iptables-save:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: iptables table nat:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> k8s: describe kube-proxy daemon set:
error: context "false-778509" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-778509" does not exist

>>> k8s: kube-proxy logs:
error: context "false-778509" does not exist

>>> host: kubelet daemon status:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: kubelet daemon config:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> k8s: kubelet logs:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 04:10:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-207884
contexts:
- context:
    cluster: kubernetes-upgrade-207884
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 04:10:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-207884
  name: kubernetes-upgrade-207884
current-context: kubernetes-upgrade-207884
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-207884
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/kubernetes-upgrade-207884/client.crt
    client-key: /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/kubernetes-upgrade-207884/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-778509

>>> host: docker daemon status:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: docker daemon config:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: /etc/docker/daemon.json:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: docker system info:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: cri-docker daemon status:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: cri-docker daemon config:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: cri-dockerd version:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: containerd daemon status:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: containerd daemon config:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: /etc/containerd/config.toml:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: containerd config dump:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: crio daemon status:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: crio daemon config:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: /etc/crio:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

>>> host: crio config:
* Profile "false-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-778509"

----------------------- debugLogs end: false-778509 [took: 3.661405762s] --------------------------------
helpers_test.go:175: Cleaning up "false-778509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-778509
--- PASS: TestNetworkPlugins/group/false (4.01s)
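The rejection above is the expected behaviour: with --container-runtime=crio there is no built-in networking, so --cni=false fails validation before any cluster is created. A minimal sketch of starts that do pass the check, using a hypothetical profile name:

    # crio needs a CNI; pick one explicitly (bridge, kindnet, calico, cilium, flannel)...
    minikube start -p crio-cni-demo --container-runtime=crio --cni=bridge
    # ...or let minikube choose one:
    # minikube start -p crio-cni-demo --container-runtime=crio --cni=auto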

TestStartStop/group/old-k8s-version/serial/FirstStart (61.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.252209781s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.25s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-762702 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9b6392ee-0350-4790-80de-baef7e6db4f3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9b6392ee-0350-4790-80de-baef7e6db4f3] Running
E1124 04:13:24.397034  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/functional-666975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003882823s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-762702 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)

TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-762702 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-762702 --alsologtostderr -v=3: (12.060806369s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-762702 -n old-k8s-version-762702
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-762702 -n old-k8s-version-762702: exit status 7 (71.051999ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-762702 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
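The "exit status 7 (may be ok)" note reflects how minikube status encodes state: per its help text the exit code is a bitmask, 1 for the host, 2 for the cluster and 4 for Kubernetes, so 7 means all three are down, which is exactly what a freshly stopped profile should report. A minimal sketch of checking it by hand (profile name reused from this run):

    minikube status --format={{.Host}} -p old-k8s-version-762702   # prints Stopped
    echo $?   # 7 = 1+2+4: host, cluster and Kubernetes all stopped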

TestStartStop/group/old-k8s-version/serial/SecondStart (50.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-762702 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.982441256s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-762702 -n old-k8s-version-762702
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.42s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-tzxjs" [6cadf852-5cc2-4f08-ad93-6d8f2962ce1e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003269205s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-tzxjs" [6cadf852-5cc2-4f08-ad93-6d8f2962ce1e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003705513s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-762702 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-762702 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/FirstStart (75.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m15.880351758s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.88s)

TestStartStop/group/embed-certs/serial/FirstStart (86.24s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1124 04:16:09.842246  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.243040572s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.24s)

TestStartStop/group/no-preload/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-600301 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2e198013-8c34-4d24-aef2-be30a7043011] Pending
helpers_test.go:352: "busybox" [2e198013-8c34-4d24-aef2-be30a7043011] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2e198013-8c34-4d24-aef2-be30a7043011] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003870843s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-600301 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.32s)

TestStartStop/group/no-preload/serial/Stop (12.04s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-600301 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-600301 --alsologtostderr -v=3: (12.043004577s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.04s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-600301 -n no-preload-600301
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-600301 -n no-preload-600301: exit status 7 (94.337604ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-600301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (53.64s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-600301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.138744336s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-600301 -n no-preload-600301
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.64s)
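
After SecondStart, the status probe that returned 7 above should now print Running. A quick manual verification of a restarted profile might look like this (the node wait is an extra sanity check, not part of the test):

    out/minikube-linux-arm64 status --format='{{.Host}}' -p no-preload-600301
    # Confirm the node is Ready from the Kubernetes side as well.
    kubectl --context no-preload-600301 wait --for=condition=Ready node --all --timeout=2m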

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-520529 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [29a8cb8a-6390-49d0-a8b7-1a3f51501ad7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [29a8cb8a-6390-49d0-a8b7-1a3f51501ad7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005900585s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-520529 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.50s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-520529 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-520529 --alsologtostderr -v=3: (12.280477946s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-520529 -n embed-certs-520529
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-520529 -n embed-certs-520529: exit status 7 (83.96181ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-520529 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-520529 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.418492719s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-520529 -n embed-certs-520529
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.87s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-q7r6n" [227ac6ea-3301-4d65-9e93-b547bcee96bc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002929749s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-q7r6n" [227ac6ea-3301-4d65-9e93-b547bcee96bc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0034481s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-600301 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.18s)
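
Both dashboard checks reduce to the pods behind the k8s-app=kubernetes-dashboard selector being healthy after the restart. A hand-rolled version of the same probe, with selector and namespace taken from the log:

    kubectl --context no-preload-600301 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
    # The test also inspects the companion scraper deployment:
    kubectl --context no-preload-600301 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper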

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-600301 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)
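
image list --format=json is what the test parses to flag images outside the expected Kubernetes set. A rough jq equivalent, assuming the JSON is an array of objects each carrying a repoTags list (the schema is not guaranteed stable across minikube versions):

    out/minikube-linux-arm64 -p no-preload-600301 image list --format=json \
      | jq -r '.[].repoTags[]?' \
      | grep -v '^registry.k8s.io/' || true   # crude stand-in for the test's allow-list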

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m21.213264015s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.21s)
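
--apiserver-port=8444 moves the API server off the default 8443, and the kubeconfig entry minikube writes should reflect it. A quick check (jsonpath filter keyed on this run's profile name):

    # The recorded server URL for the profile should end in :8444.
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-303179")].cluster.server}'; echo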

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ddq4w" [532d8426-c95e-41c5-9b89-a994820a332b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003307961s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ddq4w" [532d8426-c95e-41c5-9b89-a994820a332b] Running
E1124 04:18:17.918357  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:18:17.924695  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:18:17.936028  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:18:17.957342  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:18:17.998692  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:18:18.080084  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:18:18.241554  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:18:18.563487  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:18:19.205379  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:18:20.487314  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004047835s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-520529 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-520529 image list --format=json
E1124 04:18:23.049421  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1124 04:18:38.413413  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:18:58.894777  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (39.090310797s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-303179 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [aa301f34-6d9a-43c3-879c-d900c3ba9020] Pending
helpers_test.go:352: "busybox" [aa301f34-6d9a-43c3-879c-d900c3ba9020] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [aa301f34-6d9a-43c3-879c-d900c3ba9020] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003848631s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-303179 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.40s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-543467 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-543467 --alsologtostderr -v=3: (1.343491743s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-543467 -n newest-cni-543467
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-543467 -n newest-cni-543467: exit status 7 (72.220298ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-543467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-543467 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.350692893s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-543467 -n newest-cni-543467
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-303179 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-303179 --alsologtostderr -v=3: (12.303186885s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.30s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
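
The warning reflects that --network-plugin=cni without a bundled CNI leaves pods unschedulable until a network add-on is installed. One way to finish the setup by hand is to apply a CNI manifest such as flannel (URL correct at the time of writing; note the stock manifest assumes 10.244.0.0/16, so its net-conf.json would need editing to match the 10.42.0.0/16 pod CIDR this profile was started with):

    kubectl --context newest-cni-543467 apply -f \
      https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml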

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-543467 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179: exit status 7 (99.921436ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-303179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-303179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (56.070389566s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303179 -n default-k8s-diff-port-303179
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.48s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m24.907739352s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kxt5z" [f27d2dc7-02aa-4c7f-ad0d-2780a4cbead8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003377425s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kxt5z" [f27d2dc7-02aa-4c7f-ad0d-2780a4cbead8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005187654s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-303179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-303179 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1124 04:21:01.778669  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m20.795993679s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.80s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-778509 "pgrep -a kubelet"
I1124 04:21:06.925554  291389 config.go:182] Loaded profile config "auto-778509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)
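
KubeletFlags simply greps the kubelet process line inside the node, which is also a convenient way to eyeball the runtime wiring by hand. For a crio profile the output should include a flag along the lines of --container-runtime-endpoint=unix:///var/run/crio/crio.sock:

    out/minikube-linux-arm64 ssh -p auto-778509 "pgrep -a kubelet"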

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-778509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kd5rh" [44f9236f-a912-4a23-8e41-200de1306e66] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1124 04:21:09.842789  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/addons-153780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:21:10.318188  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:21:10.324472  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:21:10.335776  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:21:10.357115  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:21:10.398896  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:21:10.480239  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:21:10.641674  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:21:10.963291  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:21:11.605023  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:21:12.886730  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-kd5rh" [44f9236f-a912-4a23-8e41-200de1306e66] Running
E1124 04:21:15.448666  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003473482s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-778509 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
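
The three probes above share one pattern: exec into the netcat deployment and exercise a different path each time. Collected in one place, using the exact commands from the log:

    # DNS: cluster DNS must resolve the kubernetes service.
    kubectl --context auto-778509 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod can reach its own port 8080.
    kubectl --context auto-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod can reach itself back through its own service name.
    kubectl --context auto-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"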

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1124 04:21:51.293903  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (59.660786002s)
--- PASS: TestNetworkPlugins/group/calico/Start (59.66s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-d5qmt" [4db34e82-d85f-4e39-aa32-570aabe49930] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003786686s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)
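
ControllerPod waits for the CNI's per-node agent pods to become healthy. An equivalent one-liner, reusing the test's label selector and namespace:

    kubectl --context kindnet-778509 -n kube-system \
      wait --for=condition=Ready pod -l app=kindnet --timeout=10m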

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-778509 "pgrep -a kubelet"
I1124 04:22:21.273735  291389 config.go:182] Loaded profile config "kindnet-778509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-778509 replace --force -f testdata/netcat-deployment.yaml
I1124 04:22:21.607164  291389 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cxxs2" [7f0bc352-d1e6-4b0c-9d25-5e0db37684bf] Pending
helpers_test.go:352: "netcat-cd4db9dbf-cxxs2" [7f0bc352-d1e6-4b0c-9d25-5e0db37684bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cxxs2" [7f0bc352-d1e6-4b0c-9d25-5e0db37684bf] Running
E1124 04:22:32.255487  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.011917221s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.35s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-778509 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-gwd59" [b4dcea7c-3f08-451e-82b5-c227f6c5cdf4] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-gwd59" [b4dcea7c-3f08-451e-82b5-c227f6c5cdf4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004398117s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-778509 "pgrep -a kubelet"
I1124 04:22:48.377939  291389 config.go:182] Loaded profile config "calico-778509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-778509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jddld" [2c6629ac-9569-4817-83c5-d65b40e3fbe2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jddld" [2c6629ac-9569-4817-83c5-d65b40e3fbe2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003269666s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m5.706764196s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.71s)
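
Unlike the built-in keywords, this profile passes --cni a manifest path, which minikube applies after the control plane is up. The general shape, with a placeholder path (my-cni.yaml and the custom-cni-demo profile name are hypothetical):

    # --cni accepts a keyword (kindnet, calico, flannel, bridge, ...) or a path to a CNI manifest.
    out/minikube-linux-arm64 start -p custom-cni-demo --memory=3072 \
      --cni=/path/to/my-cni.yaml --driver=docker --container-runtime=crio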

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-778509 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1124 04:23:45.620127  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/old-k8s-version-762702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:23:54.177756  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/no-preload-600301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m20.238259888s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-778509 "pgrep -a kubelet"
I1124 04:24:05.817087  291389 config.go:182] Loaded profile config "custom-flannel-778509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-778509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-r2sxs" [ae47fe8e-9aad-4a5a-8654-5fbb36f5b098] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-r2sxs" [ae47fe8e-9aad-4a5a-8654-5fbb36f5b098] Running
E1124 04:24:12.112453  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:24:12.118949  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:24:12.130418  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:24:12.151925  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:24:12.194060  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:24:12.275493  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:24:12.437040  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:24:12.758709  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:24:13.400787  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 04:24:14.682884  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003103535s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-778509 exec deployment/netcat -- nslookup kubernetes.default
E1124 04:24:17.244324  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m4.973301135s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.97s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-778509 "pgrep -a kubelet"
I1124 04:24:46.902753  291389 config.go:182] Loaded profile config "enable-default-cni-778509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-778509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f6hkm" [b7b24f4a-a8d4-45ec-ad66-830c6cf20c94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f6hkm" [b7b24f4a-a8d4-45ec-ad66-830c6cf20c94] Running
E1124 04:24:53.089922  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003245657s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-778509 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
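
For reference, the three probes above are plain kubectl execs and can be rerun by hand while the netcat deployment is up (same commands as the log, with comments added):

kubectl --context enable-default-cni-778509 exec deployment/netcat -- nslookup kubernetes.default
# DNS: resolve the API server's Service name from inside the pod

kubectl --context enable-default-cni-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# Localhost: -z scans without sending data, -w 5 caps the wait at 5s, -i 5 spaces out retries

kubectl --context enable-default-cni-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
# HairPin: the pod dials its own Service name, which only works when the CNI handles hairpin (NAT-loopback) traffic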

TestNetworkPlugins/group/bridge/Start (74.87s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1124 04:25:34.051672  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/default-k8s-diff-port-303179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-778509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m14.873971114s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.87s)
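
The cert_rotation errors interleaved here and above appear to be harmless noise: they reference the client certificate of the default-k8s-diff-port-303179 profile, which an earlier test already deleted, so client-go's TLS transport cache is watching a file that no longer exists. Separately, a quick way to confirm which CNI config a freshly started profile installed; a sketch, assuming the conventional CNI config directory:

out/minikube-linux-arm64 ssh -p bridge-778509 "sudo ls /etc/cni/net.d"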

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-7p62z" [3d8dfc2e-2267-49af-995e-d0b003f04278] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004598871s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
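
The ControllerPod check is a readiness poll on the flannel DaemonSet pods; roughly the same thing by hand (a sketch):

kubectl --context flannel-778509 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m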

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-778509 "pgrep -a kubelet"
I1124 04:25:50.598736  291389 config.go:182] Loaded profile config "flannel-778509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-778509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h4bl4" [f37c318e-7814-46ba-a8d7-37f981a7fe24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h4bl4" [f37c318e-7814-46ba-a8d7-37f981a7fe24] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003566259s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.33s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-778509 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-778509 "pgrep -a kubelet"
I1124 04:26:38.949191  291389 config.go:182] Loaded profile config "bridge-778509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
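
The KubeletFlags checks dump the kubelet command line with pgrep and assert on its flags; to eyeball the runtime wiring yourself, a sketch (the grep target is an assumption about what the test verifies):

out/minikube-linux-arm64 ssh -p bridge-778509 "pgrep -a kubelet" | grep -o -- '--container-runtime-endpoint=[^ ]*'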

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-778509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qhjng" [182a91f7-82af-4135-991c-91095f2464e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qhjng" [182a91f7-82af-4135-991c-91095f2464e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004023127s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-778509 exec deployment/netcat -- nslookup kubernetes.default
E1124 04:26:48.232664  291389 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/auto-778509/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-778509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

Test skip (31/328)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-545793 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-545793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-545793
--- SKIP: TestDownloadOnlyKic (0.42s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-995056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-995056
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (5.46s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-778509 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-778509

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-778509

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-778509

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-778509

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-778509

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-778509

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-778509

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-778509

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-778509

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-778509

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: /etc/hosts:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: /etc/resolv.conf:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-778509

>>> host: crictl pods:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: crictl containers:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> k8s: describe netcat deployment:
error: context "kubenet-778509" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-778509" does not exist

>>> k8s: netcat logs:
error: context "kubenet-778509" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-778509" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-778509" does not exist

>>> k8s: coredns logs:
error: context "kubenet-778509" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-778509" does not exist

>>> k8s: api server logs:
error: context "kubenet-778509" does not exist

>>> host: /etc/cni:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: ip a s:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: ip r s:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: iptables-save:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: iptables table nat:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-778509" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-778509" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-778509" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: kubelet daemon config:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> k8s: kubelet logs:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 04:10:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-207884
contexts:
- context:
    cluster: kubernetes-upgrade-207884
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 04:10:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-207884
  name: kubernetes-upgrade-207884
current-context: kubernetes-upgrade-207884
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-207884
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/kubernetes-upgrade-207884/client.crt
    client-key: /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/kubernetes-upgrade-207884/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-778509

>>> host: docker daemon status:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: docker daemon config:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: docker system info:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: cri-docker daemon status:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: cri-docker daemon config:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: cri-dockerd version:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: containerd daemon status:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: containerd daemon config:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: containerd config dump:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: crio daemon status:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: crio daemon config:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: /etc/crio:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

>>> host: crio config:
* Profile "kubenet-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-778509"

----------------------- debugLogs end: kubenet-778509 [took: 5.229022647s] --------------------------------
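
Every probe in the dump above fails with "context was not found" or "Profile ... not found" because the kubenet profile is never started: the suite skips out before minikube start runs, so debugLogs has nothing to collect. The one kubectl config it does print is simply whatever context was current in the shared kubeconfig at that moment, here kubernetes-upgrade-207884 from a concurrently running test. The same state can be confirmed by hand (a sketch):

kubectl config get-contexts kubenet-778509   # fails: context not found
kubectl config current-context               # prints kubernetes-upgrade-207884 during this window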
helpers_test.go:175: Cleaning up "kubenet-778509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-778509
--- SKIP: TestNetworkPlugins/group/kubenet (5.46s)

TestNetworkPlugins/group/cilium (4.02s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-778509 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-778509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-778509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-778509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-778509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-778509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-778509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-778509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-778509" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-778509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-778509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-778509

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-778509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-778509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-778509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-778509" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-778509" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: kubelet daemon config:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> k8s: kubelet logs:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-289526/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 04:10:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-207884
contexts:
- context:
    cluster: kubernetes-upgrade-207884
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 04:10:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-207884
  name: kubernetes-upgrade-207884
current-context: kubernetes-upgrade-207884
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-207884
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/kubernetes-upgrade-207884/client.crt
    client-key: /home/jenkins/minikube-integration/21975-289526/.minikube/profiles/kubernetes-upgrade-207884/client.key
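
Note that this kubeconfig is left over from the earlier kubernetes-upgrade-207884 run and defines no "cilium-778509" cluster, context, or user, which is consistent with every context error above. Standard kubectl can confirm which contexts exist:

    kubectl config get-contexts
    kubectl config current-context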

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-778509

>>> host: docker daemon status:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: docker daemon config:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: docker system info:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: cri-docker daemon status:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: cri-docker daemon config:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: cri-dockerd version:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: containerd daemon status:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: containerd daemon config:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: containerd config dump:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: crio daemon status:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: crio daemon config:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: /etc/crio:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

>>> host: crio config:
* Profile "cilium-778509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-778509"

----------------------- debugLogs end: cilium-778509 [took: 3.852473065s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-778509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-778509
--- SKIP: TestNetworkPlugins/group/cilium (4.02s)
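
The delete above removes the profile's node container and its state under the test's MINIKUBE_HOME; a sketch of verifying the cleanup afterwards (same binary as the rest of this run):

    out/minikube-linux-arm64 profile list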